kubeadm is the official tool Kubernetes provides for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of the practices around cluster configuration, so experimenting with kubeadm is a good way to pick up the latest upstream best practices for configuring a cluster.
The Kubernetes document Creating a single master cluster with kubeadm states that kubeadm's main features are already in beta and are expected to reach GA during 2018, which means kubeadm is getting ever closer to being usable in production.
Of course, the Kubernetes clusters we run stably in production are highly available clusters deployed from binaries with ansible. The point of trying out kubeadm in Kubernetes 1.12 here is to follow the official best practices for cluster initialization and configuration, and to further improve our ansible deployment scripts.
System environment preparation
Environment
ip | hostname | OS | k8s-role
---|---|---|---
192.168.2.45 | k8s-master-45 | CentOS 7 | master
192.168.2.46 | k8s-work-46 | CentOS 7 | worker
192.168.2.47 | k8s-work-47 | CentOS 7 | worker
System configuration
hosts
cat /etc/hosts
192.168.2.45 k8s-master-45
192.168.2.46 k8s-work-46
192.168.2.47 k8s-work-47
Disable SELinux and firewalld
# Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
# Disable firewalld
systemctl stop firewalld
systemctl disable firewalld
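Editing /etc/selinux/config only takes effect after a reboot. If you want SELinux out of the way immediately, you can also switch it to permissive mode for the current session (an optional extra step, not part of the original walkthrough):
# put SELinux into permissive mode right away (does not survive a reboot; the config file change makes it permanent)
setenforce 0
# verify the current mode
getenforce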
Kernel configuration
Configure the bridge netfilter parameters so that bridged traffic does not bypass iptables
# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf
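modprobe br_netfilter only loads the module for the current boot. If you want it loaded automatically after a reboot as well, one common approach (an optional sketch, not from the original article) is a modules-load drop-in:
# load br_netfilter automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF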
Disable swap
swapoff -a # temporary
# permanent: comment out the swap entry in /etc/fstab
vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
mount -a
reboot
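After the reboot you can do a quick sanity check that swap is really off (just a verification, not an original step):
free -m          # the Swap line should show 0 total
cat /proc/swaps  # should list no active swap devices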
Install docker-ce
Note that Kubernetes 1.12 has been validated against Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06. The minimum supported Docker version is 1.11.1 and the highest is 18.06; since the latest Docker release is already 18.09, we need to pin the version to 18.06.1-ce when installing.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7 -y
systemctl start docker
systemctl enable docker
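To confirm the pinned version was installed and the daemon is running, a quick optional check:
docker version          # client and server should both report 18.06.1-ce
systemctl status docker # the service should be active (running)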
Adjust the time zone
If the time zone is wrong and the clock is far off, certificate validation will fail.
yum install ntpdate -y
ntpdate time.windows.com
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
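ntpdate is a one-off sync. If you want the clocks to stay in sync over time, one simple option (a sketch assuming cronie is installed; not part of the original article) is a periodic cron entry:
# sync the clock every 30 minutes via root's crontab
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -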
Deploy Kubernetes with kubeadm
Install kubeadm and kubelet
Install on every node
yum repository
# google
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
#aliyun
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install
yum install -y kubelet kubeadm kubectl ipvsadm
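Note that the command above installs whatever the newest packages in the repository are. Since this walkthrough targets v1.12.2, you may prefer to pin the versions explicitly (a hedged example; the exact package versions available depend on your repository):
yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2 ipvsadm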
Initialize the cluster with kubeadm init
Enable kubelet to start on boot
systemctl enable kubelet.service
kubeadm configuration file
kubeadm v1.11+ added a new command,
kubeadm config print-default
which makes it easy to dump kubeadm's default configuration to a file and then modify it to suit your needs.
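A typical way to use it is to redirect the defaults into a file and then trim it down to the fields you actually want to override, for example:
kubeadm config print-default > kubeadm.yaml
# edit kubeadm.yaml, keeping only the fields you want to change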
# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12" # resolves to v1.12.2
imageRepository: registry.aliyuncs.com/google_containers # use the Aliyun image repository
Pull the images
The most painful part of this deployment method is pulling the images. The main images are listed below; alternatively, you can pull them from a registry reachable inside China and re-tag them locally.
kubeadm config images pull --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.12.2
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.12.2
registry.aliyuncs.com/google_containers/kube-scheduler:v1.12.2
registry.aliyuncs.com/google_containers/kube-proxy:v1.12.2
registry.aliyuncs.com/google_containers/pause:3.1
registry.aliyuncs.com/google_containers/etcd:3.2.24
registry.aliyuncs.com/google_containers/coredns:1.2.2
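If you prefer the pull-and-retag approach mentioned above instead of (or in addition to) setting imageRepository, a rough sketch looks like this (the image list matches the tags shown above; adjust it if your kubeadm resolves different versions):
images=(
  kube-apiserver:v1.12.2
  kube-controller-manager:v1.12.2
  kube-scheduler:v1.12.2
  kube-proxy:v1.12.2
  pause:3.1
  etcd:3.2.24
  coredns:1.2.2
)
for img in "${images[@]}"; do
  # pull from the Aliyun mirror, then re-tag with the k8s.gcr.io name that kubeadm expects
  docker pull registry.aliyuncs.com/google_containers/${img}
  docker tag  registry.aliyuncs.com/google_containers/${img} k8s.gcr.io/${img}
done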
Initialize the cluster
kubeadm init --config kubeadm.yaml
If you need to re-initialize (for example after a failed attempt), you can first run
kubeadm reset
That completes the deployment of the Kubernetes master. Once the deployment finishes, kubeadm prints a join command:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.2.45:6443 --token 2qncip.5p98b3zuvi9rumwk --discovery-token-ca-cert-hash sha256:3227d728428eaba7145196d66dc954a554be6a3ae2d32d088632a8561602fd48
This kubeadm join command is what you use to add more worker nodes to the master.
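The default bootstrap token expires after 24 hours. If you need to join a node later, you can generate a fresh token together with the full join command on the master:
kubeadm token create --print-join-command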
kubeadm also prints the configuration commands you need before using the Kubernetes cluster for the first time:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the current node status
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 NotReady master 5m13s v1.12.2
The node is NotReady because we have not yet deployed any network plugin.
From the output of kubectl describe you can see that Pods which depend on the pod network, such as CoreDNS, are stuck in the Pending state, i.e. they cannot be scheduled. This is of course expected, because the master node's network is not ready yet.
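The commands behind that check are, for example:
kubectl describe node k8s-master-45       # look at the Conditions and Events sections
kubectl get pods -n kube-system -o wide   # the CoreDNS pods stay Pending until a network plugin is installed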
Deploy a network plugin
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Status after deployment
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 Ready master 6m29s v1.12.2
Deploy the worker nodes
Start kubelet
pause image
sudo docker pull registry.aliyuncs.com/google_containers/pause:3.1
sudo docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
Start kubelet
sudo systemctl start kubelet
Run the kubeadm join command that was generated when the master node was deployed:
kubeadm join 192.168.2.45:6443 --token 2qncip.5p98b3zuvi9rumwk --discovery-token-ca-cert-hash sha256:3227d728428eaba7145196d66dc954a554be6a3ae2d32d088632a8561602fd48
Cluster status
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 Ready master 26m v1.12.2
k8s-work-46 Ready <none> 8m51s v1.12.2
k8s-work-47 Ready <none> 2m28s v1.12.2
Add role labels
kubectl label node k8s-work-46 node-role.kubernetes.io/worker=worker
kubectl label node k8s-work-47 node-role.kubernetes.io/worker=worker
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 Ready master 27m v1.12.2
k8s-work-46 Ready worker 10m v1.12.2
k8s-work-47 Ready worker 3m52s v1.12.2
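The ROLES column shown by kubectl get node simply reflects labels of the form node-role.kubernetes.io/<role>. If you ever need to remove the label again, the usual trailing-dash syntax works:
kubectl label node k8s-work-46 node-role.kubernetes.io/worker-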
Original article: https://www.cnblogs.com/knmax/p/12141577.html