Kubernetes Container Cluster Deployment: Node Components (5)

Download the Kubernetes server components on the master:

wget https://storage.googleapis.com/kubernetes-release/release/v1.9.2/kubernetes-server-linux-amd64.tar.gz

Download the Kubernetes node components on the nodes:

wget https://dl.k8s.io/v1.9.2/kubernetes-node-linux-amd64.tar.gz

Deploying the master components

On the master:
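
The downloaded server archive has to be unpacked before the binaries can be copied; this step is only implied by the working directory shown below. A minimal sketch, assuming the archive was downloaded into /root/master_pkg:

cd /root/master_pkg
tar xzf kubernetes-server-linux-amd64.tar.gz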

Move the binaries into the bin directory:

[[email protected] bin]# pwd
/root/master_pkg/kubernetes/server/bin

[[email protected] bin]# cp kube-controller-manager kube-scheduler kube-apiserver /opt/kubernetes/bin/

[[email protected] bin]# chmod +x /opt/kubernetes/bin/*
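
A quick sanity check that the binaries landed and are executable (not part of the original steps; --version is supported by all three components and should report v1.9.2 here):

ls -l /opt/kubernetes/bin/
/opt/kubernetes/bin/kube-apiserver --version
/opt/kubernetes/bin/kube-controller-manager --version
/opt/kubernetes/bin/kube-scheduler --version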

Create the apiserver.sh script:

#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \--etcd-servers=${ETCD_SERVERS} \--insecure-bind-address=127.0.0.1 \--bind-address=${MASTER_ADDRESS} \--insecure-port=8080 \--secure-port=6443 \--advertise-address=${MASTER_ADDRESS} \--allow-privileged=true \--service-cluster-ip-range=10.10.10.0/24 \--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node \--kubelet-https=true \--enable-bootstrap-token-auth \--token-auth-file=/opt/kubernetes/cfg/token.csv \--service-node-port-range=30000-50000 \--tls-cert-file=/opt/kubernetes/ssl/server.pem  \--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \--client-ca-file=/opt/kubernetes/ssl/ca.pem \--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \--etcd-cafile=/opt/kubernetes/ssl/ca.pem \--etcd-certfile=/opt/kubernetes/ssl/server.pem \--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver


Run the apiserver.sh script:

[[email protected] bin]# ./apiserver.sh 192.168.1.101 https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

Copy token.csv into the cfg directory:

cp /opt/kubernetes/ssl/token.csv /opt/kubernetes/cfg/
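
If token.csv was not created in an earlier part, the file expected by --token-auth-file is a plain CSV of token, user name, user uid and group. A hedged sketch of generating one (the group name is an assumption; the user must match the kubelet-bootstrap user bound later):

# generate a random 32-character hex token
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# format: token,user,uid,"group"
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/ssl/token.csv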

Start kube-apiserver:

[[email protected] bin]# systemctl start kube-apiserver
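
To confirm the API server is up, check the unit and hit the insecure port, which the config above binds to 127.0.0.1:8080 (a quick check, not in the original steps; /healthz is expected to return "ok"):

systemctl status kube-apiserver
ps -ef | grep kube-apiserver
curl http://127.0.0.1:8080/healthz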

Create the controller-manager.sh script:

#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \--master=${MASTER_ADDRESS}:8080 \--leader-elect=true \--address=127.0.0.1 \--service-cluster-ip-range=10.10.10.0/24 \--cluster-name=kubernetes \--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \--root-ca-file=/opt/kubernetes/ssl/ca.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager


Run the script:

[[email protected] bin]# ./controller-manager.sh 127.0.0.1 

Check that the service is running:

[[email protected] bin]# ps -ef | grep controller-manager
root      16464      1 10 14:34 ?        00:00:01 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.10.10.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem

Create the scheduler.sh script:

#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \--master=${MASTER_ADDRESS}:8080 \--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler


Run the script:

[[email protected] bin]# ./scheduler.sh 127.0.0.1

Check that the service is running:

[[email protected] bin]# ps -ef | grep scheduler
root      16531      1  4 14:37 ?        00:00:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Check the cluster component status:

[[email protected] bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}   

Deploying the nodes

Copy the kubeconfig files generated on the master (located in /opt/kubernetes/ssl) to the cfg directory on both nodes:

[[email protected] ssl]# scp *kubeconfig [email protected]192.168.1.102:/opt/kubernetes/cfg/
[[email protected] ssl]# scp *kubeconfig [email protected]192.168.1.103:/opt/kubernetes/cfg/
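
These files (bootstrap.kubeconfig and kube-proxy.kubeconfig) were generated in an earlier part of this series. As a reminder, a hedged sketch of how bootstrap.kubeconfig is typically built with kubectl config; the API server address is assumed to be the master at 192.168.1.101:6443:

cd /opt/kubernetes/ssl
KUBE_APISERVER="https://192.168.1.101:6443"
# reuse the first field of token.csv as the bootstrap token
BOOTSTRAP_TOKEN=$(cut -d, -f1 /opt/kubernetes/cfg/token.csv)

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

kube-proxy.kubeconfig follows the same pattern, authenticating with the kube-proxy client certificate instead of the bootstrap token.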

On node1:

Unpack the kubernetes-node-linux-amd64.tar.gz archive:

[[email protected] node_pkg]# tar xvf kubernetes-node-linux-amd64.tar.gz 

Move the extracted binaries into the bin directory:

[[email protected] bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[[email protected] bin]# chmod +x /opt/kubernetes/bin/*

Create the kubelet.sh script:

#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \--address=${NODE_ADDRESS} \--hostname-override=${NODE_ADDRESS} \--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \--cert-dir=/opt/kubernetes/ssl \--allow-privileged=true \--cluster-dns=${DNS_SERVER_IP} \--cluster-domain=cluster.local \--fail-swap-on=false \--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet


Run the script:

[[email protected] bin]# ./kubelet.sh 192.168.1.102 10.10.10.2
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Note: 192.168.1.102 is the current node's IP and 10.10.10.2 is the cluster DNS address.

Check whether kubelet has started:
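
The check itself is not shown in the original; systemctl and journalctl are the usual way to inspect the kubelet unit and its log:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20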

The log shows an error: permission denied when creating the certificate signing request:

error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope: clusterrole.rbac.authorization.k8s.io "system:node-bootstrap" not found

Solution:

On the master, create the cluster role binding that grants the bootstrap permission:

[[email protected] ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Start kubelet again on the node.
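
For example (a sketch; the journalctl tail just confirms the bootstrap CSR was submitted this time):

systemctl restart kubelet
journalctl -u kubelet --no-pager | tail -n 20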

Create the proxy.sh script:

#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 --hostname-override=${NODE_ADDRESS} --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy


Run the script:

[[email protected] ssl]# ./proxy.sh 192.168.1.102

Note: 192.168.1.102 is the current node's address.
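
A quick check that kube-proxy came up (not in the original steps):

systemctl status kube-proxy
ps -ef | grep kube-proxy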

View the pending certificate request from the node on the master:

[[email protected] ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-iVbj9CKPaWhh7VAQfqK16Xz9in4-Byb_XZaDJLz3zfw   11m       kubelet-bootstrap   Pending

Approve the certificate signing request:

[[email protected] ssl]# kubectl certificate approve node-csr-iVbj9CKPaWhh7VAQfqK16Xz9in4-Byb_XZaDJLz3zfw

Check the request again:

[[email protected] ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-iVbj9CKPaWhh7VAQfqK16Xz9in4-Byb_XZaDJLz3zfw   14m       kubelet-bootstrap   Approved,Issued

Check that the node is in Ready state:

[[email protected] ssl]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.1.102   Ready     <none>    1m        v1.9.2

On node2:

Copy the files from node1 to node2 (or repeat the node1 steps):

[[email protected] ssl]# scp -r /opt/kubernetes/bin [email protected]192.168.1.103:/opt/kubernetes

[[email protected] ssl]# scp -r /opt/kubernetes/cfg [email protected]192.168.1.103:/opt/kubernetes

[[email protected] ssl]# scp /usr/lib/systemd/system/kubelet.service [email protected]:/usr/lib/systemd/system/

[[email protected] ssl]# scp /usr/lib/systemd/system/kube-proxy.service [email protected]:/usr/lib/systemd/system/

Edit the kubelet config under cfg on node2 and change the IP to this node's IP:

KUBELET_OPTS="--logtostderr=true \
--v=4 --address=192.168.1.103 --hostname-override=192.168.1.103 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --allow-privileged=true --cluster-dns=10.10.10.2 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Edit the kube-proxy config under cfg on node2 and change the IP to this node's IP:

KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.1.103 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Start the services:

[[email protected] cfg]# systemctl start kubelet
[[email protected] cfg]# systemctl start kube-proxy

On the master, check for the new certificate request:

[[email protected] ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-OPWss8__QdJqP6QmudtkaVQWeDh278BxzP35hdeAkZI   17s       kubelet-bootstrap   Pending
node-csr-iVbj9CKPaWhh7VAQfqK16Xz9in4-Byb_XZaDJLz3zfw   28m       kubelet-bootstrap   Approved,Issued

Approve the certificate signing request:

[[email protected] ssl]# kubectl certificate approve node-csr-OPWss8__QdJqP6QmudtkaVQWeDh278BxzP35hdeAkZI

Check the nodes:

[[email protected] ssl]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.1.102   Ready     <none>    15m       v1.9.2
192.168.1.103   Ready     <none>    12s       v1.9.2

Test example

Create an nginx instance with 3 replicas:

[[email protected] ssl]# kubectl run nginx --image=nginx --replicas=3

Check the Pods:

[[email protected] ssl]# kubectl get pod
NAME                   READY     STATUS              RESTARTS   AGE
nginx-8586cf59-7r4zq   0/1       ContainerCreating   0          10s
nginx-8586cf59-9wpwr   0/1       ContainerCreating   0          10s
nginx-8586cf59-h2n5h   0/1       ContainerCreating   0          10s

View all resource objects:

[[email protected] ssl]# kubectl get all
NAME                       READY     STATUS              RESTARTS   AGE
pod/nginx-8586cf59-7r4zq   0/1       ContainerCreating   0          1m
pod/nginx-8586cf59-9wpwr   0/1       ContainerCreating   0          1m
pod/nginx-8586cf59-h2n5h   0/1       ContainerCreating   0          1m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.10.10.1   <none>        443/TCP   1h

NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx   3         3         3            0           1m

NAME                                   DESIRED   CURRENT   READY     AGE
replicaset.extensions/nginx-8586cf59   3         3         0         1m

NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3         3         3            0           1m

NAME                             DESIRED   CURRENT   READY     AGE
replicaset.apps/nginx-8586cf59   3         3         0         1m

Check which node each pod is running on:

[[email protected] ssl]# kubectl get pod -o wide
NAME                   READY     STATUS             RESTARTS   AGE       IP            NODE
nginx-8586cf59-7r4zq   0/1       ImagePullBackOff   0          7m        172.17.47.2   192.168.1.103
nginx-8586cf59-9wpwr   1/1       Running            0          7m        172.17.47.3   192.168.1.103
nginx-8586cf59-h2n5h   1/1       Running            0          7m        172.17.45.2   192.168.1.102
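
The ImagePullBackOff on the first pod usually just means the image pull from Docker Hub is slow or temporarily failing; the events behind it can be inspected with kubectl describe (pod name taken from the output above):

kubectl describe pod nginx-8586cf59-7r4zq | tail -n 20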

Expose the deployment as a service:

[[email protected] ssl]# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
[[email protected] ssl]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.10.10.1     <none>        443/TCP        2h
nginx        NodePort    10.10.10.130   <none>        88:34986/TCP   13s

Note: port 88 is the service port used inside the cluster (on the nodes); 34986 is the randomly assigned NodePort through which the service can be reached from outside.

Access the service on port 88 from a node:

[[email protected] ssl]# curl -I 10.10.10.130:88
HTTP/1.1 200 OK
Server: nginx/1.15.2
Date: Wed, 08 Aug 2018 08:54:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Jul 2018 13:02:29 GMT
Connection: keep-alive
ETag: "5b572365-264"
Accept-Ranges: bytes
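
The same service is also reachable from outside the cluster through any node's IP on the random NodePort (a sketch using the values above):

curl -I http://192.168.1.102:34986
curl -I http://192.168.1.103:34986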

Original article: https://www.cnblogs.com/zhangzihong/p/9443910.html
