This guide draws on two Chinese documents:
1. https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster.html
2. http://running.iteye.com/blog/2322634
Notes:
Always be clear about whether a command is meant to run on the master node or on a worker node; some commands only work on the master and others only on the workers.
Also be clear about which user runs each command: root or a regular user.
Major steps:
1. Install the software on the master node and the worker nodes;
2. Start the Kubernetes services on the master node and initialize the master;
3. Start the Kubernetes services on the worker nodes and join them to the master;
4. Start a pod from the master that runs an application (nginx as the example);
5. Create a service from the master and associate the application with it;
6. Test the scaling ability: start an additional, identical pod for the same application.
1. Install the software on the master node and the worker nodes
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s http://packages.faasx.com/google/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
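Optionally, you can stop apt from upgrading these packages underneath a running cluster by putting them on hold. This is not part of the original guide, just a common precaution:
sudo apt-mark hold kubelet kubeadm kubectl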
2. Start the Kubernetes services on the master node and initialize the master
2.1 Prepare to initialize a cluster on the master node
Because of network restrictions, we need to pull the images that the k8s initialization requires ahead of time and add the corresponding k8s.gcr.io tags:
## Pull the images
docker pull reg.qiniu.com/k8s/kube-apiserver-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-controller-manager-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-scheduler-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-proxy-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/etcd-amd64:3.1.12
docker pull reg.qiniu.com/k8s/pause-amd64:3.1
## Add the k8s.gcr.io tags
docker tag reg.qiniu.com/k8s/kube-apiserver-amd64:v1.10.2 k8s.gcr.io/kube-apiserver-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-scheduler-amd64:v1.10.2 k8s.gcr.io/kube-scheduler-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-controller-manager-amd64:v1.10.2 k8s.gcr.io/kube-controller-manager-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-proxy-amd64:v1.10.2 k8s.gcr.io/kube-proxy-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag reg.qiniu.com/k8s/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
## Kubernetes 1.10 added CoreDNS; if you use CoreDNS (disabled by default), the following three images are not needed.
docker pull reg.qiniu.com/k8s/k8s-dns-sidecar-amd64:1.14.10
docker pull reg.qiniu.com/k8s/k8s-dns-kube-dns-amd64:1.14.10
docker pull reg.qiniu.com/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-sidecar-amd64:1.14.10 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-kube-dns-amd64:1.14.10 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.10 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10
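The pull/tag pairs above can also be driven by a small shell loop instead of being typed one by one. This is just a convenience sketch; the image names and versions are copied from the list above and must match what your kubeadm version expects (the kube-dns images would need a second loop if you use them):
for img in kube-apiserver-amd64:v1.10.2 kube-controller-manager-amd64:v1.10.2 \
           kube-scheduler-amd64:v1.10.2 kube-proxy-amd64:v1.10.2 \
           etcd-amd64:3.1.12 pause-amd64:3.1; do
  docker pull "reg.qiniu.com/k8s/${img}"                       ## pull from the mirror registry
  docker tag "reg.qiniu.com/k8s/${img}" "k8s.gcr.io/${img}"    ## retag so kubeadm finds it locally
done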
2.2 Initialize the cluster
On the master node, run:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Note: it is best to save this command's output in a text file, because several values from it are needed later.
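If you would rather capture the output automatically instead of copying it by hand, piping it through tee is one option (a small convenience sketch; the log file name kubeadm-init.log is arbitrary):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 | tee kubeadm-init.log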
2.3 Put the Kubernetes config under the regular user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This way kubectl finds the config file automatically and does not depend on a config that only root can read.
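If you do run kubectl as root, pointing it at the admin config through an environment variable is an alternative (a one-off setting for the current shell, not required by this guide):
export KUBECONFIG=/etc/kubernetes/admin.conf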
2.4 Install the network add-on components (https://docs.projectcalico.org/v3.3/getting-started/kubernetes/)
2.4.1 Install etcd
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/etcd.yaml
2.4.2 Install RBAC
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/rbac.yaml
2.4.3 Install Calico
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/calico.yaml
2.4.4 Confirm the installation succeeded (see also the optional check after this list)
watch kubectl get pods --all-namespaces
Press Ctrl + C once everything is Running.
2.4.5 Check once more
kubectl get nodes -o wide
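If you only want to follow the Calico pods rather than everything in the cluster, a narrower watch works too. This is optional, and the k8s-app=calico-node label is an assumption based on the Calico manifests applied above:
kubectl get pods -n kube-system -l k8s-app=calico-node -w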
2.5 Start the Kubernetes services on the worker nodes
On each worker node, run:
sudo kubeadm join 192.168.0.8:6443 --token vtyk9m.g4afak37myq3rsdi --discovery-token-ca-cert-hash sha256:19246ce11ba3fc633fe0b21f2f8aaaebd7df9103ae47138dc0dd615f61a32d99
This command must match the output from step 2.2 (the address, token, and hash will differ; substitute the values from your own output). If the output from 2.2 can no longer be found, the join command can be regenerated with the command below.
On the master node, run:
kubeadm token create --print-join-command
Then run the join command it prints on the worker node.
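Tokens expire after a while, so if you only want to check whether the original token is still valid, listing them on the master is enough (optional):
sudo kubeadm token list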
2.6 Confirm that the master and worker nodes have finished starting; this can take a few minutes:
On the master node, run:
kubectl get nodes
3. Create a usable pod
3.1 Run an nginx image as the application inside the pod
On the master node, run:
kubectl run my-nginx --image=nginx --replicas=1 --port=80
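With this kubectl version, kubectl run creates a Deployment named my-nginx that manages the pod. You can confirm the Deployment exists before moving on (optional check):
kubectl get deployment my-nginx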
3.2 Confirm the pod has been created
On the master node, run:
kubectl get pods
3.3 Expose the pods as a Kubernetes service
kubectl expose deployment my-nginx --port=8080 --target-port=80
service/my-nginx exposed
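A ClusterIP service like this one is only reachable from inside the cluster. If you also want to reach nginx from outside (for example from your own machine), exposing a second service of type NodePort is one option; this is just a sketch, and the name my-nginx-nodeport is made up for the example:
kubectl expose deployment my-nginx --port=8080 --target-port=80 --type=NodePort --name=my-nginx-nodeport
kubectl get service my-nginx-nodeport
The second command shows the node port that Kubernetes picked; the application is then reachable at <any node IP>:<that port>.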
3.4 Check that the service has been created
kubectl get services
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    108m
my-nginx     ClusterIP   10.98.38.80   <none>        8080/TCP   91m
3.5 Access the service
kubectl describe service/my-nginx
Name:              my-nginx
Namespace:         default
Labels:            run=my-nginx
Annotations:       <none>
Selector:          run=my-nginx
Type:              ClusterIP
IP:                10.98.38.80
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         192.168.244.65:80
Session Affinity:  None
Events:            <none>
This gives us the service IP and port.
On a worker node, run:
curl 10.98.38.80:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
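Inside the cluster the service can also be reached by its DNS name instead of the ClusterIP, provided the cluster DNS add-on (kube-dns or CoreDNS) is running. A throwaway busybox pod is one way to try this; the pod name dns-test is arbitrary and the whole step is optional:
kubectl run dns-test --image=busybox --restart=Never --rm -it -- wget -qO- http://my-nginx.default.svc.cluster.local:8080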
4. Scale up the number of pods behind the service
4.1 Check the pod count before scaling
kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-756f645cd7-mg45n   1/1     Running   0          98m
4.2 Perform the scale-up
kubectl scale deployment my-nginx --replicas=2
deployment.extensions/my-nginx scaled
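Instead of picking a replica count by hand, the deployment could also be autoscaled. Note that this only has an effect if resource metrics are available in the cluster, which this guide does not set up, so treat it as an optional experiment:
kubectl autoscale deployment my-nginx --min=1 --max=3 --cpu-percent=80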
4.3 Check the pod information after scaling
kubectl get pods
NAME                        READY   STATUS              RESTARTS   AGE
my-nginx-756f645cd7-dww7g   0/1     ContainerCreating   0          6s
my-nginx-756f645cd7-mg45n   1/1     Running             0          98m
4.4 Check the service information
kubectl describe service/my-nginx
Name:              my-nginx
Namespace:         default
Labels:            run=my-nginx
Annotations:       <none>
Selector:          run=my-nginx
Type:              ClusterIP
IP:                10.98.38.80
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         192.168.244.65:80,192.168.244.66:80
Session Affinity:  None
Events:            <none>
4.5 Cross-check that the pod IPs match the service endpoints
kubectl describe pods | grep IP
IP:                 192.168.244.66
IP:                 192.168.244.65
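A slightly more targeted way to list the same pod IPs is to filter by the run=my-nginx label (which kubectl run applied, as the service selector above shows) and use the wide output, which includes an IP column; this check is optional:
kubectl get pods -l run=my-nginx -o wide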
4.6 Access the service again
curl 10.98.38.80:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
5. Clean up the cluster when finished
5.1 Drain the node
On the master node, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
5.2 Delete the node
On the master node, run:
kubectl delete node <node name>
5.3 Reset the cluster state on the node
On the node being removed, run:
sudo kubeadm reset
Output:
sudo kubeadm reset
[sudo] password for luwenwei:
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] no etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd
[reset] please manually reset etcd to prevent further issues
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
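Finally, if you also want to remove the demo application from sections 3 and 4, deleting its Service and Deployment on the master is enough (the names are the ones created above); this is optional:
kubectl delete service my-nginx
kubectl delete deployment my-nginx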
Original article: https://www.cnblogs.com/helww/p/10040819.html