Creating a Highly Available Kubernetes v1.12.0 Cluster with kubeadm

Node Planning

Hostname IP Role
k8s-master01 10.3.1.20 etcd、Master、Node、keepalived
k8s-master02 10.3.1.21 etcd、Master、Node、keepalived
k8s-master03 10.3.1.25 etcd、Master、Node、keepalived
VIP 10.3.1.29 None

Version information:

  • OS: Ubuntu 16.04
  • Docker: 17.03.2-ce
  • k8s: v1.12

High-availability architecture diagram from the official documentation

The two components most critical to high availability:

  1. etcd: distributed key-value store and the data center of the k8s cluster.
  2. kube-apiserver: the single entry point to the cluster and the communication hub for all components. The apiserver itself is stateless, so running it in a distributed fashion is easy.

Other core components:

  • controller-manager and scheduler can also be deployed in multiple copies, but only one instance is active at a time to keep cluster state consistent, because these components modify cluster state.
    Since the cluster components are loosely coupled, there are many possible ways to achieve high availability.
  • With multiple kube-apiservers, the question is which one the apiserver clients should connect to. Here a traditional haproxy+keepalived-style setup in front of the apiservers floats a VIP, and apiserver clients such as kubelet and kube-proxy connect to that VIP.

Preparation

1. Passwordless SSH login between all k8s nodes.
2. Time synchronization across the nodes.
3. Swap must be disabled on every node (swapoff -a), otherwise kubelet fails to start; see the snippet after this list for making the change persistent.
4. Add every node's hostname and IP to /etc/hosts.
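A minimal sketch for keeping swap off across reboots (the fstab edit is an addition here, not one of the original steps):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab    # comment out swap entries so swap stays disabled after a reboot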

kubeadm can create an HA cluster in two ways:

  1. The etcd cluster is configured by kubeadm and runs as pods on the Master nodes.
  2. The etcd cluster is deployed separately.
    Deploying etcd separately seems the simpler of the two, so that is the approach used here.

Deploying the etcd Cluster

A correctly running etcd cluster is a prerequisite for the k8s cluster, so etcd is deployed first.

Install the CA certificates

Install the CFSSL certificate management tool

Download the binaries directly:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/bin/cfssl-certinfo

echo "export PATH=/opt/bin:$PATH" > /etc/profile.d/k8s.sh

All k8s executables are placed in the /opt/bin/ directory.

Create the CA configuration file

root@k8s-master01:~# mkdir ssl
root@k8s-master01:~# cd ssl/
root@k8s-master01:~/ssl# cfssl print-defaults config > config.json
root@k8s-master01:~/ssl# cfssl print-defaults csr > csr.json
# Create the ca-config.json file below, following the format of config.json
# The expiry is set to 87600h

root@k8s-master01:~/ssl# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Create the CA certificate signing request

root@k8s-master01:~/ssl# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GD",
      "L": "SZ",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate and private key

root@k8s-master01:~/ssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
root@k8s-master01:~/ssl# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Copy the CA certificates to the corresponding directory on every node

root@k8s-master01:~/ssl# mkdir -p /etc/kubernetes/ssl
root@k8s-master01:~/ssl# cp ca* /etc/kubernetes/ssl
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.21:/etc/
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.25:/etc/

Download the etcd binaries:

With the CA certificates in place, etcd can now be configured.

root@k8s-master01:~$ wget https://github.com/coreos/etcd/releases/download/v3.2.22/etcd-v3.2.22-linux-amd64.tar.gz
root@k8s-master01:~$ tar xzf etcd-v3.2.22-linux-amd64.tar.gz
root@k8s-master01:~$ cp etcd-v3.2.22-linux-amd64/etcd etcd-v3.2.22-linux-amd64/etcdctl /opt/bin/

For k8s v1.12, the etcd version must be at least 3.2.18.

Create the etcd certificates

Create the etcd certificate signing request file

root@k8s-master01:~/ssl# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
     "127.0.0.1",
     "10.3.1.20",
     "10.3.1.21",
     "10.3.1.25"
  ],
   "key": {
     "algo": "rsa",
     "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "GD",
       "L": "SZ",
       "O": "k8s",
       "OU": "System"
     }
   ]
}
# Note: the hosts field above must list the IPs of all etcd nodes, otherwise etcd will fail to start.

Generate the etcd certificate and private key

  root@k8s-master01:~/ssl# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
    -ca-key=/etc/kubernetes/ssl/ca-key.pem \
    -config=/etc/kubernetes/ssl/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
    2018/10/01 10:01:14 [INFO] generate received request
    2018/10/01 10:01:14 [INFO] received CSR
    2018/10/01 10:01:14 [INFO] generating key: rsa-2048
    2018/10/01 10:01:15 [INFO] encoded CSR
    2018/10/01 10:01:15 [INFO] signed certificate with serial number 379903753757286569276081473959703411651822370300
    2018/02/06 10:01:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").

    root@k8s-master01:~/ssl# ls etcd*
    etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

# The value of -profile=kubernetes corresponds to the profiles field in -config=/etc/kubernetes/ssl/ca-config.json.

Copy the certificates to the corresponding directory on all nodes:

root@k8s-master01:~/ssl# mkdir -p /etc/etcd/ssl
root@k8s-master01:~/ssl# cp etcd*.pem /etc/etcd/ssl
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.21:/etc/
etcd-key.pem                                                       100% 1675     1.5KB/s   00:00
etcd.pem                                                           100% 1407     1.4KB/s   00:00
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.25:/etc/
etcd-key.pem                                                       100% 1675     1.6KB/s   00:00
etcd.pem                                                           100% 1407     1.4KB/s   00:00

Create the etcd systemd unit file

With the certificates in place, the unit file can be written.

root@k8s-master01:~# mkdir -p /var/lib/etcd   # the etcd working directory must be created first

root@k8s-master01:~# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/bin/etcd --name=etcd-host0 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://10.3.1.20:2380 \
  --listen-peer-urls=https://10.3.1.20:2380 \
  --listen-client-urls=https://10.3.1.20:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.3.1.20:2379 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster=etcd-host0=https://10.3.1.20:2380,etcd-host1=https://10.3.1.21:2380,etcd-host2=https://10.3.1.25:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start etcd

root@k8s-master01:~/ssl# systemctl daemon-reload
root@k8s-master01:~/ssl# systemctl enable etcd
root@k8s-master01:~/ssl# systemctl start etcd

Copy the etcd unit file to the other two nodes, adjust the node-specific settings (see the sketch below), and start etcd there as well.
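For example, on k8s-master02 (etcd-host1) only the node-specific flags in the ExecStart line change; the sketch below follows the node plan above, and k8s-master03 (etcd-host2) uses 10.3.1.25 in the same way:

--name=etcd-host1
--initial-advertise-peer-urls=https://10.3.1.21:2380
--listen-peer-urls=https://10.3.1.21:2380
--listen-client-urls=https://10.3.1.21:2379,http://127.0.0.1:2379
--advertise-client-urls=https://10.3.1.21:2379
# --initial-cluster, --initial-cluster-token and the certificate paths stay identical on every node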
Check the cluster status.
Because etcd uses TLS, the etcdctl commands must be given the certificates:

# List the etcd members
root@k8s-master01:~# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem member list
702819a30dfa37b8: name=etcd-host2 peerURLs=https://10.3.1.20:2380 clientURLs=https://10.3.1.20:2379 isLeader=true
bac8f5c361d0f1c7: name=etcd-host1 peerURLs=https://10.3.1.21:2380 clientURLs=https://10.3.1.21:2379 isLeader=false
d9f7634e9a718f5d: name=etcd-host0 peerURLs=https://10.3.1.25:2380 clientURLs=https://10.3.1.25:2379 isLeader=false

# Or check whether the cluster is healthy
root@k8s-master01:~/ssl# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem cluster-health
member 1af3976d9329e8ca is healthy: got healthy result from https://10.3.1.20:2379
member 34b6c7df0ad76116 is healthy: got healthy result from https://10.3.1.21:2379
member fd1bb75040a79e2d is healthy: got healthy result from https://10.3.1.25:2379
cluster is healthy

Install Docker

apt-get update
apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial

After installing Docker, set the FORWARD chain policy to ACCEPT.

# the default is DROP
iptables -P FORWARD ACCEPT
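Docker resets the FORWARD policy to DROP whenever the daemon restarts, so the rule is worth persisting. One possible approach (an addition here; the drop-in file name is made up) is a systemd drop-in that re-applies the policy after Docker starts:

# /etc/systemd/system/docker.service.d/forward-accept.conf
[Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT

# reload and restart so the drop-in takes effect
systemctl daemon-reload && systemctl restart docker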

Install kubeadm

  • kubeadm must be installed on all nodes
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y  kubeadm

# this automatically installs kubeadm, kubectl, kubelet, kubernetes-cni and socat

After installation, enable the kubelet service so it starts on boot:

systemctl enable kubelet

kubelet must be enabled at boot so that the cluster components come back up automatically after a system restart.

Cluster Initialization

Now the cluster initialization is run on the three masters.
The difference between a single-master setup and an HA setup with kubeadm is that for HA kubeadm is given a configuration file, and init is run with this file on each master node.

Write the kubeadm configuration file

root@k8s-master01:~/kubeadm-config# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
networking:
  podSubnet: 192.168.0.0/16
apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- 10.3.1.20
- 10.3.1.21
- 10.3.1.25
- 10.3.1.29
- 127.0.0.1
etcd:
  external:
    endpoints:
    - https://10.3.1.20:2379
    - https://10.3.1.21:2379
    - https://10.3.1.25:2379
    caFile: /etc/kubernetes/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
    dataDir: /var/lib/etcd
token: 547df0.182e9215291ff27f
tokenTTL: "0"
root@k8s-master01:~/kubeadm-config#

Configuration notes:
In v1.12 the kubeadm API version has been raised to kubeadm.k8s.io/v1alpha3 and the kind is now ClusterConfiguration.
podSubnet: the custom pod network CIDR.
apiServerCertSANs: list the hostnames, IPs and the VIP of all kube-apiserver nodes.
etcd: external means an external etcd cluster is used; list the etcd endpoints and certificate paths below it.
If the etcd cluster were managed by kubeadm instead, this would be local, together with any custom startup flags.
token: optional; one can be generated with the command kubeadm token generate (see below).
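For reference, a token in the expected format can be generated ahead of time with the command mentioned above (the value shown is only an illustration of the format):

root@k8s-master01:~# kubeadm token generate
0f38ad.a8c3b9d4e5f60718
# format: [a-z0-9]{6}.[a-z0-9]{16}, which is what the token field above expects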

Run init on the first master

# make sure swap is disabled
root@k8s-master01:~/kubeadm-config# kubeadm init --config kubeadm-config.yaml

This produces output like the following:

# initializing kubernetes v1.12.0
[init] using Kubernetes version: v1.12.0
# pre-flight checks before initialization
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
# images can be pulled ahead of init with 'kubeadm config images pull'
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
# generate the kubelet service configuration
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
# generate certificates
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master01 k8s-master02 k8s-master03] and IPs [10.96.0.1 10.3.1.20 10.3.1.20 10.3.1.21 10.3.1.25 10.3.1.29 127.0.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
# generate kubeconfig files
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
# generate the static pod manifests
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
# the kubelet starts the control plane from the pod manifests in /etc/kubernetes/manifests
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
# pull the images referenced by the manifests
[init] this might take a minute or longer if the control plane images have to be pulled
# all control plane components are up
[apiclient] All control plane components are healthy after 27.014452 seconds
# upload the configuration to the "kubeadm-config" ConfigMap in the "kube-system" namespace
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
# add the master label and the NoSchedule taint to the node
[markmaster] Marking the node k8s-master01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
# the bootstrap token in use
[bootstraptoken] using token: w79yp6.erls1tlc4olfikli
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
# finally install the essential addons: CoreDNS and the kube-proxy daemonset
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:
# Record the command below; it is needed when other nodes join the cluster.
  kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2
  • Run the commands from the init output:

    root@k8s-master01:~# mkdir -p $HOME/.kube
    root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

There is now one node in the cluster, and its status is "NotReady":

root@k8s-master01:~# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   3m50s   v1.12.0
root@k8s-master01:~#

Check the core components of the first master, which run as pods:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP          NODE           NOMINATED NODE
coredns-576cbf47c7-2dqsj               0/1     Pending   0          4m29s   <none>      <none>         <none>
coredns-576cbf47c7-7sqqz               0/1     Pending   0          4m29s   <none>      <none>         <none>
kube-apiserver-k8s-master01            1/1     Running   0          3m46s   10.3.1.20   k8s-master01   <none>
kube-controller-manager-k8s-master01   1/1     Running   0          3m40s   10.3.1.20   k8s-master01   <none>
kube-proxy-dpvkk                       1/1     Running   0          4m30s   10.3.1.20   k8s-master01   <none>
kube-scheduler-k8s-master01            1/1     Running   0          3m37s   10.3.1.20   k8s-master01   <none>
root@k8s-master01:~#
# coredns stays Pending because of the taint on the master.

Copy the generated pki directory to the other master nodes

root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.21:/etc/kubernetes/
root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.25:/etc/kubernetes/

Copy the kubeadm configuration file over as well

root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.21:~/
root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.25:~/

The first master is now deployed. The second and third masters, and however many more follow, are all initialized with the same kubeadm-config.yaml.



Run kubeadm init on the second master

root@k8s-master02:~# kubeadm init --config kubeadm-config.yaml
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection

Run kubeadm init on the third master

root@k8s-master03:~# kubeadm init --config kubeadm-config.yaml
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster

Finally, check the nodes:

root@k8s-master01:~# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   31m     v1.12.0
k8s-master02   NotReady   master   15m     v1.12.0
k8s-master03   NotReady   master   6m52s   v1.12.0
root@k8s-master01:~#

Check the status of the components:

# the core control-plane components are Running on all three masters
root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS              RESTARTS   AGE     IP          NODE           NOMINATED NODE
coredns-576cbf47c7-2dqsj               0/1     ContainerCreating   0          31m     <none>      k8s-master02   <none>
coredns-576cbf47c7-7sqqz               0/1     ContainerCreating   0          31m     <none>      k8s-master02   <none>
kube-apiserver-k8s-master01            1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-apiserver-k8s-master02            1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-apiserver-k8s-master03            1/1     Running             0          6m24s   10.3.1.25   k8s-master03   <none>
kube-controller-manager-k8s-master01   1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-controller-manager-k8s-master02   1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-controller-manager-k8s-master03   1/1     Running             0          6m25s   10.3.1.25   k8s-master03   <none>
kube-proxy-6tfdg                       1/1     Running             0          16m     10.3.1.21   k8s-master02   <none>
kube-proxy-dpvkk                       1/1     Running             0          31m     10.3.1.20   k8s-master01   <none>
kube-proxy-msqgn                       1/1     Running             0          7m44s   10.3.1.25   k8s-master03   <none>
kube-scheduler-k8s-master01            1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-scheduler-k8s-master02            1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-scheduler-k8s-master03            1/1     Running             0          6m26s   10.3.1.25   k8s-master03   <none>

Remove the taint from all masters so that they are also schedulable:

root@k8s-master01:~# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted

所有节点是"NotReady"状态,需要安装CNI插件
安装Calico网络插件:

root@k8s-master01:~# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
configmap/calico-config created
daemonset.extensions/calico-etcd created
service/calico-etcd created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
serviceaccount/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the node status again:

root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   39m   v1.12.0
k8s-master02   Ready    master   24m   v1.12.0
k8s-master03   Ready    master   15m   v1.12.0

All components on the masters are now healthy:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE
calico-etcd-dcbtp                          1/1     Running   0          102s   10.3.1.25        k8s-master03   <none>
calico-etcd-hmd2h                          1/1     Running   0          101s   10.3.1.20        k8s-master01   <none>
calico-etcd-pnksz                          1/1     Running   0          99s    10.3.1.21        k8s-master02   <none>
calico-kube-controllers-75fb4f8996-dxvml   1/1     Running   0          117s   10.3.1.25        k8s-master03   <none>
calico-node-6kvg5                          2/2     Running   1          117s   10.3.1.21        k8s-master02   <none>
calico-node-82wjt                          2/2     Running   1          117s   10.3.1.25        k8s-master03   <none>
calico-node-zrtj4                          2/2     Running   1          117s   10.3.1.20        k8s-master01   <none>
coredns-576cbf47c7-2dqsj                   1/1     Running   0          38m    192.168.85.194   k8s-master02   <none>
coredns-576cbf47c7-7sqqz                   1/1     Running   0          38m    192.168.85.193   k8s-master02   <none>
kube-apiserver-k8s-master01                1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-apiserver-k8s-master02                1/1     Running   0          22m    10.3.1.21        k8s-master02   <none>
kube-apiserver-k8s-master03                1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
kube-controller-manager-k8s-master01       1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-controller-manager-k8s-master02       1/1     Running   0          21m    10.3.1.21        k8s-master02   <none>
kube-controller-manager-k8s-master03       1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
kube-proxy-6tfdg                           1/1     Running   0          23m    10.3.1.21        k8s-master02   <none>
kube-proxy-dpvkk                           1/1     Running   0          38m    10.3.1.20        k8s-master01   <none>
kube-proxy-msqgn                           1/1     Running   0          14m    10.3.1.25        k8s-master03   <none>
kube-scheduler-k8s-master01                1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-scheduler-k8s-master02                1/1     Running   0          22m    10.3.1.21        k8s-master02   <none>
kube-scheduler-k8s-master03                1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
root@k8s-master01:~#

Deploying Worker Nodes

Use kubeadm join on all worker nodes to join them to the cluster; here the apiserver address of k8s-master01 is used uniformly for joining. (If the join command printed by init was lost, see the note below.)
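If the join command printed by kubeadm init was not saved, or the token has expired, a new one can be printed on any master (standard kubeadm functionality, not part of the original write-up):

root@k8s-master01:~# kubeadm token create --print-join-command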

Join k8s-node01 to the cluster:

root@k8s-node01:~# kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2

This produces output like the following:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "10.3.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.1.20:6443"
[discovery] Requesting info from "https://10.3.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.1.20:6443"
[discovery] Successfully established connection with API Server "10.3.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check the components running on the node:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide |grep node01
calico-node-hsg4w                          2/2     Running            2          47m    10.3.1.63        k8s-node01     <none>
kube-proxy-xn795                           1/1     Running            0          47m    10.3.1.63        k8s-node01     <none>

Check the current node status.

# there are now four nodes, all Ready
root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   132m   v1.12.0
k8s-master02   Ready    master   117m   v1.12.0
k8s-master03   Ready    master   108m   v1.12.0
k8s-node01     Ready    <none>   52m    v1.12.0

Deploying keepalived

Deploy keepalived on the three master nodes: apiserver + keepalived float a VIP, and clients such as kubectl, kubelet and kube-proxy connect to the apiserver through this VIP. A load balancer is not used for now.

  • Install keepalived
apt-get install keepalived
  • Write the keepalived configuration file
# MASTER node
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id KEP
}

vrrp_script chk_k8s {
    script "killall -0 kube-apiserver"
    interval 1
    weight -5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.3.1.29
    }
 track_script {
    chk_k8s
 }
 notify_master "/data/service/keepalived/notify.sh master"
 notify_backup "/data/service/keepalived/notify.sh backup"
 notify_fault "/data/service/keepalived/notify.sh fault"
}

Copy this configuration file to the remaining masters, lower the priority and set the state to BACKUP (see the sketch below). keepalived then floats the VIP 10.3.1.29, which was already included in the certificates created earlier.
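On the BACKUP masters only the following keepalived.conf lines differ (a sketch; any priority lower than the MASTER's 100 works):

    state BACKUP
    priority 90     # e.g. 90 on k8s-master02, 80 on k8s-master03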

Adjust the client configuration

During kubeadm init/join, the kubelet and kube-proxy on each node were pointed at a single, specific kube-apiserver. This step therefore changes the apiserver address in the configuration of these two components to the VIP (see the sketch below).
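A minimal sketch of the change, assuming the default kubeadm file locations and that the VIP answers on port 6443 (run on every node):

# point kubelet at the VIP and restart it
sed -i 's#server: https://.*:6443#server: https://10.3.1.29:6443#' /etc/kubernetes/kubelet.conf
systemctl restart kubelet

# kube-proxy reads its kubeconfig from a ConfigMap: change the server: line there, then recreate its pods
kubectl -n kube-system edit configmap kube-proxy          # set server: https://10.3.1.29:6443
kubectl -n kube-system delete pod -l k8s-app=kube-proxy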

Verify the Cluster

Create an nginx deployment

root@k8s-master01:~# kubectl run nginx --image=nginx:1.10 --port=80 --replicas=1
deployment.apps/nginx created

Check that the nginx pod was created

root@k8s-master01:~# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE
nginx-787b58fd95-p9jwl   1/1     Running   0          70s   192.168.45.23   k8s-node02   <none>

Create a NodePort service for nginx

$ kubectl expose deployment nginx --type=NodePort --port=80
service "nginx" exposed

Check that the nginx service was created

$ kubectl get svc -l=run=nginx -o wide
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
nginx     NodePort   10.101.144.192   <none>        80:30847/TCP   10m       run=nginx

Verify that the nginx NodePort service actually serves traffic

$ curl 10.3.1.21:30847
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
     .........
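The VIP itself can also be probed, for example against the apiserver health endpoint (an extra check added here, not part of the original; any HTTP response, whether "ok" or an authorization error, shows that an apiserver is reachable behind the VIP):

curl -k https://10.3.1.29:6443/healthz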

This shows the HA cluster is working. kubeadm's HA support is still at the v1alpha stage, so use it in production with caution; for more detail see the official deployment documentation.

Original article: http://blog.51cto.com/newfly/2288088
