Setting Up a Kubernetes 1.14.3 Cluster on CentOS with kubeadm

Lab environment overview (see Reference 1 and Reference 2):

Hostname   IP Address       Software deployed                               Role
M-kube12   192.168.10.12    master + etcd + docker + keepalived + haproxy   master
M-kube13   192.168.10.13    master + etcd + docker + keepalived + haproxy   master
M-kube14   192.168.10.14    master + etcd + docker + keepalived + haproxy   master
N-kube15   192.168.10.15    docker + node                                   node
N-kube16   192.168.10.16    docker + node                                   node
VIP        192.168.10.100                                                   VIP

1.1 Environment Preparation


# 1. Disable the firewall and SELinux, install base packages
yum install -y net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl lrzsz      # run on all machines; installs basic tools

systemctl stop firewalld && systemctl disable firewalld     # stop and permanently disable the firewall

sestatus    # check SELinux status
setenforce 0        # disable SELinux for the current boot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config     # disable SELinux permanently

swapoff -a          # turn off swap
sed -i 's/.*swap.*/#&/' /etc/fstab      # comment out the swap entry so it stays off after reboot

# 2. Set up passwordless SSH login
ssh-keygen -t rsa       # generate a key pair
ssh-copy-id <ip-address>      # copy the public key to each host
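# A minimal sketch (not in the original) for pushing the key to every host in the table above;
# adjust the IP list and user to your environment:
for ip in 192.168.10.12 192.168.10.13 192.168.10.14 192.168.10.15 192.168.10.16; do
    ssh-copy-id root@$ip    # prompts once per host for its password
done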

# 3. Switch to domestic (China) yum mirrors
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.$(date +%Y%m%d)
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
# Docker repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Configure a domestic Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache -y

#----------------------
# Alternative (from another guide): the same repo with GPG checking disabled
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 4. Kernel parameters: pass bridged IPv4 traffic to the iptables chains
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system

# 5. Raise the file-descriptor and process limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

# 6. Load the IPVS modules
yum install ipset ipvsadm -y
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make the script executable and run it; the lsmod at the end confirms the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Alternative (from another guide): load every IPVS module shipped with the running kernel
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.*#\1#'\`; do
    /sbin/modinfo -F filename \$i  &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
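# Either way, confirm the modules actually loaded (a quick check, not in the original):
lsmod | grep -E 'ip_vs|nf_conntrack'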

1.2 Configure keepalived


yum install -y keepalived

# Configuration on the 192.168.10.12 machine

cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.100:6444"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 100
    priority 100
    advert_int 1
    mcast_src_ip 192.168.10.12
    nopreempt
    authentication {
        auth_type PASS
        auth_pass fana123
    }
    unicast_peer {
        192.168.10.13
        192.168.10.14
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

# keepalived configuration on the 192.168.10.13 machine
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.100:6444"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 100
    priority 90
    advert_int 1
    mcast_src_ip 192.168.10.13
    nopreempt
    authentication {
        auth_type PASS
        auth_pass fana123
    }
    unicast_peer {
        192.168.10.12
        192.168.10.14
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# keepalived configuration on the 192.168.10.14 machine
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.100:6444"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 100
    priority 80
    advert_int 1
    mcast_src_ip 192.168.10.14
    nopreempt
    authentication {
        auth_type PASS
        auth_pass fana123
    }
    unicast_peer {
        192.168.10.12
        192.168.10.13
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

# Start keepalived and enable it at boot
systemctl restart keepalived && systemctl enable keepalived
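# Quick check (not in the original): exactly one master should hold the VIP,
# assuming interface ens33 as configured above
ip addr show ens33 | grep 192.168.10.100
systemctl status keepalived     # VRRP state and health-check results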

1.3 Configure HAProxy


yum install -y haproxy

# HAProxy configuration (same on all three master machines)
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6444
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server M-kube12 192.168.10.12:6443 check maxconn 2000
    server M-kube13 192.168.10.13:6443 check maxconn 2000
    server M-kube14 192.168.10.14:6443 check maxconn 2000
EOF

# Machines 12, 13, and 14 use exactly the same configuration

# Start HAProxy and enable it at boot
systemctl enable haproxy && systemctl start haproxy
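# Quick sanity check (not in the original) that the proxy is listening on each master:
ss -tlnp | grep 6444
# Once the apiservers exist later, the VIP should answer through HAProxy:
# curl -k https://192.168.10.100:6444/version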

Alternatively, both components can be deployed as containers.


# HAProxy startup script
mkdir -p /data/lb
cat > /data/lb/start-haproxy.sh << "EOF"
#!/bin/bash
MasterIP1=192.168.10.12
MasterIP2=192.168.10.13
MasterIP3=192.168.10.14
MasterPort=6443

docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \
    -e MasterIP1=$MasterIP1 \
    -e MasterIP2=$MasterIP2 \
    -e MasterIP3=$MasterIP3 \
    -e MasterPort=$MasterPort \
    wise2c/haproxy-k8s
EOF

# keepalived startup script
cat > /data/lb/start-keepalived.sh << "EOF"
#!/bin/bash
VIRTUAL_IP=192.168.10.100
INTERFACE=ens33
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18

docker run -itd --restart=always --name=Keepalived-K8S \
    --net=host --cap-add=NET_ADMIN \
    -e VIRTUAL_IP=$VIRTUAL_IP \
    -e INTERFACE=$INTERFACE \
    -e CHECK_PORT=$CHECK_PORT \
    -e RID=$RID \
    -e VRID=$VRID \
    -e NETMASK_BIT=$NETMASK_BIT \
    -e MCAST_GROUP=$MCAST_GROUP \
    wise2c/keepalived-k8s
EOF

# Copy the scripts to machines 13 and 14 as well, then run them on each master
sh /data/lb/start-haproxy.sh && sh /data/lb/start-keepalived.sh

docker ps   # shows the containers' status; their config files can be inspected inside the containers

1.4 Configure etcd

1.4.1 Generate the etcd certificates on the 192.168.10.12 machine


# Download the cfssl binaries
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Install cfssl
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

# Create the CA config files (the IP addresses are those of the etcd nodes)
mkdir /root/ssl && cd /root/ssl

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

#--------------------------------------------------------#

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

#--------------------------------------------------------#

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.14"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

#--------------------------------------------------------#
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
    -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd

# Distribute the etcd certificates from 192.168.10.12 to the 192.168.10.13 and 192.168.10.14 machines

mkdir -p /etc/etcd/ssl && cp *.pem /etc/etcd/ssl/

ssh -n 192.168.10.13 "mkdir -p /etc/etcd/ssl && exit"
ssh -n 192.168.10.14 "mkdir -p /etc/etcd/ssl && exit"

scp -r /etc/etcd/ssl/*.pem 192.168.10.13:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem 192.168.10.14:/etc/etcd/ssl/
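# Optional check (not in the original): confirm the certificate covers all three etcd node IPs
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'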

1.4.2 Install and configure etcd on all three master nodes


yum install etcd -y
mkdir -p /var/lib/etcd


# On the 192.168.10.12 machine
cat <<'EOF' >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name M-kube12 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.10.12:2380 \
  --listen-peer-urls https://192.168.10.12:2380 \
  --listen-client-urls https://192.168.10.12:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.12:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster M-kube12=https://192.168.10.12:2380,M-kube13=https://192.168.10.13:2380,M-kube14=https://192.168.10.14:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


# On the 192.168.10.13 machine
cat <<'EOF' >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name M-kube13 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.10.13:2380 \
  --listen-peer-urls https://192.168.10.13:2380 \
  --listen-client-urls https://192.168.10.13:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.13:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster M-kube12=https://192.168.10.12:2380,M-kube13=https://192.168.10.13:2380,M-kube14=https://192.168.10.14:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


# On the 192.168.10.14 machine
cat <<'EOF' >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name M-kube14 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.10.14:2380 \
  --listen-peer-urls https://192.168.10.14:2380 \
  --listen-client-urls https://192.168.10.14:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.14:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster M-kube12=https://192.168.10.12:2380,M-kube13=https://192.168.10.13:2380,M-kube14=https://192.168.10.14:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


# Enable start at boot and start etcd now
cp /etc/systemd/system/etcd.service /usr/lib/systemd/system/
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd && systemctl status etcd

# Check cluster health on any etcd node
etcdctl --endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 \
    --ca-file=/etc/etcd/ssl/ca.pem \
    --cert-file=/etc/etcd/ssl/etcd.pem \
    --key-file=/etc/etcd/ssl/etcd-key.pem \
    cluster-health

# A healthy cluster reports:
member 1af68d968c7e3f22 is healthy: got healthy result from https://192.168.10.12:2379
member 55204c19ed228077 is healthy: got healthy result from https://192.168.10.14:2379
member e8d9a97b17f26476 is healthy: got healthy result from https://192.168.10.13:2379
cluster is healthy
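# The check above uses the etcdctl v2 API. If the packaged etcd is 3.x (an assumption;
# check with etcd --version), the v3 equivalent is as follows -- note the flag names change:
ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 \
    --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
    endpoint health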

1.5 Install Docker

Docker now comes in two editions: Docker CE (the free community edition) and Docker EE (the commercial enterprise edition). We use the CE edition.

Install Docker on all machines.

Install Docker via yum:


#1. Install the yum repo utilities
yum install -y yum-utils device-mapper-persistent-data lvm2

#2. Download the official docker-ce repo file (already added above, so skipped here)
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#3. Disable the docker-ce-edge repo: edge is the unstable development channel; install from stable
yum-config-manager --disable docker-ce-edge
#4. Refresh the local yum cache
yum makecache fast
#5. Install the Docker CE package
yum -y install docker-ce
#6. Configure the daemon. kubelet's cgroup driver must match Docker's, and the settings below are the officially recommended ones.
#   Because image pulls are slow from inside China, an Aliyun registry mirror is appended at the end of the config.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
#7. Restart Docker, enable it at boot, and check its status
systemctl restart docker && systemctl enable docker && systemctl status docker
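# Quick check (not in the original): confirm Docker picked up the systemd cgroup driver
docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd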

Run hello-world to verify:


[root@M-kube12 ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

1.6 Install the kubelet and kubeadm Packages

Use the DaoCloud accelerator (this step can be skipped):


curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
# docker version >= 1.12
# {"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
# Success.
# You need to restart docker to take effect: sudo systemctl restart docker
systemctl restart docker

Install kubectl, kubelet, kubeadm, and kubernetes-cni on all machines.


yum list kubectl kubelet kubeadm kubernetes-cni     # list the installable packages
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.sohu.com
* updates: mirrors.sohu.com
# available packages:
kubeadm.x86_64                                    1.14.3-0                                              kubernetes
kubectl.x86_64                                    1.14.3-0                                             kubernetes
kubelet.x86_64                                    1.14.3-0                                              kubernetes
kubernetes-cni.x86_64                             0.7.5-0                                              kubernetes

# Then install kubectl, kubelet, kubeadm, and kubernetes-cni
yum install -y kubectl kubelet kubeadm kubernetes-cni

# kubelet talks to the other cluster nodes and manages the Pod and container lifecycle on its own node.
# kubeadm is Kubernetes' automated deployment tool; it lowers deployment difficulty and improves efficiency.
# kubectl is the Kubernetes cluster management CLI.
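# Note (not in the original): by the time you install, the repo may carry versions newer than 1.14.3.
# A pinned install keeps the exact versions shown in the listing above:
yum install -y kubelet-1.14.3-0 kubeadm-1.14.3-0 kubectl-1.14.3-0 kubernetes-cni-0.7.5-0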

Modify the kubelet configuration file (optional):


vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the file may instead be at the path below
/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Change this line (the driver must match Docker's; with the daemon.json above that is systemd, not cgroupfs)
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# Add this line
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"
# Reload the configuration
systemctl daemon-reload


#1. Command completion for kubectl
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
# Enable and start kubelet on all hosts (it will restart in a loop until kubeadm init/join supplies its config)
systemctl enable kubelet && systemctl start kubelet     

1.7 Initialize the Cluster

kubeadm init performs the following main steps:

- [init]: starts initialization for the specified version
- [preflight]: runs pre-flight checks and pulls the required Docker images
- [kubelet-start]: generates the kubelet config file /var/lib/kubelet/config.yaml; kubelet cannot start without it, which is why kubelet fails to start before initialization
- [certificates]: generates the certificates Kubernetes uses and stores them in /etc/kubernetes/pki
- [kubeconfig]: generates the kubeconfig files in /etc/kubernetes; components use them to talk to each other
- [control-plane]: installs the master components from the YAML files in the /etc/kubernetes/manifests directory
- [etcd]: installs the etcd service from /etc/kubernetes/manifests/etcd.yaml
- [wait-control-plane]: waits for the control-plane components to start
- [apiclient]: checks the health of the master components
- [uploadconfig]: uploads the configuration
- [kubelet]: configures kubelet via a ConfigMap
- [patchnode]: records CNI information on the Node object via annotations
- [mark-control-plane]: labels the current node with the master role and an unschedulable taint, so ordinary Pods are not scheduled on master nodes by default
- [bootstrap-token]: generates the token to record; kubeadm join uses it later to add nodes to the cluster
- [addons]: installs the CoreDNS and kube-proxy add-ons

1.7.1 Create the cluster initialization config file on the 192.168.10.12 machine

Reference: kubernetes

Reference: kubeadm


kubeadm config print init-defaults > kubeadm-config.yaml    # generates an editable default init config; alternatively use the file below as-is

# 1. Create the cluster initialization config file
cat <<EOF > /etc/kubernetes/kubeadm-master.config
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
controlPlaneEndpoint: "192.168.10.100:6443"
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.10.12
  - 192.168.10.13
  - 192.168.10.14
  - 192.168.10.100
etcd:
  external:
    endpoints:
    - https://192.168.10.12:2379
    - https://192.168.10.13:2379
    - https://192.168.10.14:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

# 2. Then run:
kubeadm config images pull --config /etc/kubernetes/kubeadm-master.config   # optional: pre-pull the images first
kubeadm init --config /etc/kubernetes/kubeadm-master.config --experimental-upload-certs | tee kubeadm-init.log
# tee captures the init log in kubeadm-init.log; --experimental-upload-certs lets the other control-plane nodes fetch the certificates automatically when they join.

# 3. Recovering from a failed init
kubeadm reset       # whether init failed or succeeded, kubeadm reset cleans the cluster state on this node
# or
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a | awk '{print $1}' | xargs docker rm -f
systemctl stop kubelet

# 4. A successful init ends with output like this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5     --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d     --experimental-control-plane --certificate-key 3044cb04c999706795b28c1d3dcd2305dcf181787d7c6537284341a985395c20

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5     --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d 

# 5. Then copy the kubeconfig
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config      # for any other user, copy the file to that user's $HOME/.kube and chown it the same way

1.7.2 Check current status


[root@M-kube12 kubernetes]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
m-kube12   NotReady   master   3m40s   v1.14.3      # STATUS is still NotReady (no pod network yet)

[root@M-kube12 kubernetes]# kubectl -n kube-system get pod
NAME                               READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-fmlsh           0/1     Pending   0          3m40s
coredns-8686dcc4fd-m22j7           0/1     Pending   0          3m40s
etcd-m-kube12                      1/1     Running   0          2m59s
kube-apiserver-m-kube12            1/1     Running   0          2m53s
kube-controller-manager-m-kube12   1/1     Running   0          2m33s
kube-proxy-4kg8d                   1/1     Running   0          3m40s
kube-scheduler-m-kube12            1/1     Running   0          2m45s

[root@M-kube12 kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"} 

1.7.3 Deploy the flannel network (run on all nodes)


wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# image used: quay.io/coreos/flannel:v0.11.0-amd64

cat kube-flannel.yml | grep image
cat kube-flannel.yml | grep 10.244

sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#willdockerhub/flannel:v0.11.0-amd64#g' kube-flannel.yml  # skip this substitution if your network can reach quay.io

kubectl apply -f kube-flannel.yml

# or apply directly from the pinned commit URL
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

# After a short wait, the node and all pods should be Running
[root@m-fana3 kubernetes]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
m-fana3   Ready    master   42m   v1.14.3       # the node is Ready now
[root@m-fana3 kubernetes]# kubectl -n kube-system get pod
NAME                              READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-2z6m2          1/1     Running   0          42m
coredns-8686dcc4fd-4k7mm          1/1     Running   0          42m
etcd-m-fana3                      1/1     Running   0          41m
kube-apiserver-m-fana3            1/1     Running   0          41m
kube-controller-manager-m-fana3   1/1     Running   0          41m
kube-flannel-ds-amd64-6zrzt       1/1     Running   0          109s
kube-proxy-lc8d5                  1/1     Running   0          42m
kube-scheduler-m-fana3            1/1     Running   0          41m

# If instead you see something like this, the image pull has probably failed:
kubectl -n kube-system get pod
NAME                               READY   STATUS                  RESTARTS   AGE
coredns-8686dcc4fd-c9mw7           0/1     Pending                 0          43m
coredns-8686dcc4fd-l8fpm           0/1     Pending                 0          43m
kube-apiserver-m-kube12            1/1     Running                 0          42m
kube-controller-manager-m-kube12   1/1     Running                 0          17m
kube-flannel-ds-amd64-gcmmp        0/1     Init:ImagePullBackOff   0          11m
kube-proxy-czzk7                   1/1     Running                 0          43m
kube-scheduler-m-kube12            1/1     Running                 0          42m

# Inspect the pod with kubectl describe pod kube-flannel-ds-amd64-gcmmp --namespace=kube-system; the events at the end show errors like the following. You can pull the image manually or install flannel from a binary instead.
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       11m                    default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-gcmmp to m-kube12
  Normal   Pulling         11m                    kubelet, m-kube12  Pulling image "willdockerhub/flannel:v0.11.0-amd64"
  Warning  FailedMount     7m27s                  kubelet, m-kube12  MountVolume.SetUp failed for volume "flannel-token-6g9n7" : couldn't propagate object cache: timed out waiting for the condition
  Warning  FailedMount     7m27s                  kubelet, m-kube12  MountVolume.SetUp failed for volume "flannel-cfg" : couldn't propagate object cache: timed out waiting for the condition
  Warning  Failed          4m21s                  kubelet, m-kube12  Failed to pull image "willdockerhub/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = context canceled
  Warning  Failed          3m53s                  kubelet, m-kube12  Failed to pull image "willdockerhub/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Failed          3m16s                  kubelet, m-kube12  Failed to pull image "willdockerhub/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
  Warning  Failed          3m16s (x3 over 4m21s)  kubelet, m-kube12  Error: ErrImagePull
  Normal   SandboxChanged  3m14s                  kubelet, m-kube12  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         2m47s (x6 over 4m21s)  kubelet, m-kube12  Back-off pulling image "willdockerhub/flannel:v0.11.0-amd64"
  Warning  Failed          2m47s (x6 over 4m21s)  kubelet, m-kube12  Error: ImagePullBackOff
  Normal   Pulling         2m33s (x4 over 7m26s)  kubelet, m-kube12  Pulling image "willdockerhub/flannel:v0.11.0-amd64"
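If the node can reach neither quay.io nor Docker Hub, one workaround (a sketch using only the image this article already references) is to pull the image on a machine with internet access and ship it to each node with docker save/load:

docker pull willdockerhub/flannel:v0.11.0-amd64                         # on a machine with access
docker save willdockerhub/flannel:v0.11.0-amd64 -o flannel-v0.11.0.tar
# copy flannel-v0.11.0.tar to every cluster node, then on each node:
docker load -i flannel-v0.11.0.tar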

1.7.4 Join the cluster and verify


# 1. On the other master machines, run the control-plane join command
kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5     --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d     --experimental-control-plane --certificate-key 3044cb04c999706795b28c1d3dcd2305dcf181787d7c6537284341a985395c20
# 2. Copy the kubeconfig into the user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 3. On the worker nodes, run the join command
# If you've lost it, regenerate it with: kubeadm token create --print-join-command

kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5     --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d

# 4. Verify cluster state
kubectl -n kube-system get pod -o wide  # pod placement across nodes

kubectl get nodes -o wide   # node status

kubectl -n kube-system get svc  # services
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   16m

ipvsadm -ln     # show the IPVS proxy rules

1.7.5 Test the cluster

We deploy a simple web service to test the cluster.


cat > /opt/deployment-goweb.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb
spec:
  selector:
    matchLabels:
      app: goweb
  replicas: 4
  template:
    metadata:
      labels:
        app: goweb
    spec:
      containers:
      - image: lingtony/goweb
        name: goweb
        ports:
        - containerPort: 8000
EOF

#-------------------------------------

cat > /opt/svc-goweb.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: gowebsvc
spec:
  selector:
    app: goweb
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 8000
EOF

# ----------------------------------- Deploy the service
kubectl apply -f /opt/deployment-goweb.yaml
kubectl apply -f /opt/svc-goweb.yaml
# -------------- Check the pods
kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
goweb-6c569f884-4ln4s   1/1     Running   0          75s   10.244.1.2   n-kube15   <none>           <none>
goweb-6c569f884-jcnrs   1/1     Running   0          75s   10.244.1.3   n-kube15   <none>           <none>
goweb-6c569f884-njnzk   1/1     Running   0          75s   10.244.1.4   n-kube15   <none>           <none>
goweb-6c569f884-zxnrx   1/1     Running   0          75s   10.244.1.5   n-kube15   <none>           <none>

# -------- Check the service
kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
gowebsvc     ClusterIP   10.105.87.199   <none>        80/TCP    84s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   30m

# ----- Access test: requests to the Service are load-balanced across the pods
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-jcnrs
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-4ln4s
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-zxnrx
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-njnzk
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-jcnrs
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-4ln4s
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-zxnrx
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-njnzk
curl http://10.105.87.199/info  #  Hostname: goweb-6c569f884-jcnrs           
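The same check as a short loop, equivalent to the repeated calls above:

for i in $(seq 1 5); do curl -s http://10.105.87.199/info; done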

1.8 Configure the Dashboard

There is no web UI by default; you can install the dashboard add-on on a master machine to manage the cluster from a browser.

Dashboard releases on GitHub: https://github.com/kubernetes/dashboard/releases

Image needed:

k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

We can first pull the image from an Aliyun mirror registry, for example:
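A minimal sketch of that pull-and-retag step (the mirror path below is an assumption, not from the original article; substitute whichever mirror actually hosts the image):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   # assumed mirror path
# retag so manifests that reference k8s.gcr.io resolve locally:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1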

