Background
Kubernetes has become a must-learn architecture among today's Docker container-management tools. Compared with Swarm, its architecture is heavier and its components and configuration are more complex, but the functionality it provides is also far more powerful. The basic concepts and architecture of k8s are not covered here; plenty of material is available online.
Driven by this trend, our company has inevitably started investigating k8s, so it is time to get hands-on with this powerful container-management architecture. The first step in learning k8s is standing up a cluster. The simplest approach is probably to install straight from the official binary packages, but here I followed the official k8s installation guide and chose to install with kubeadm instead.
Environment
Master:192.168.232.130
Node1:192.168.232.131
Node2:192.168.232.129
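The kubeadm output further down warns that the master's hostname (k8s-node1) cannot be resolved. Adding all three machines to /etc/hosts on every host avoids that warning. This is just a sketch; the node hostnames k8s-node2 and k8s-node3 are assumptions, substitute your actual hostnames:
# cat >> /etc/hosts <<EOF
192.168.232.130 k8s-node1
192.168.232.131 k8s-node2
192.168.232.129 k8s-node3
EOF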
Installation steps
1. Initialize the system and install the packages Kubernetes needs (on the master and all node machines)
Add the Kubernetes yum repository; inside China you can use the Alibaba Cloud mirror:
# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
Disable SELinux:
# setenforce 0
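Note that setenforce 0 lasts only until the next reboot. To keep SELinux permissive permanently, one common approach is to also edit /etc/selinux/config, for example:
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config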
Install the components Kubernetes requires:
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
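A plain yum install pulls whatever versions are newest in the repo, which may be newer than the v1.10.0 images pre-pulled below. If you want the packages to match this walkthrough, pinning the versions is an option (the exact version strings available depend on what the mirror carries):
# yum install -y kubelet-1.10.0 kubeadm-1.10.0 kubectl-1.10.0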
2. Pull the Docker images Kubernetes needs from registries inside China ahead of time. This saves download time when initializing the master and nodes, and avoids image-pull failures caused by network problems (the default k8s.gcr.io registry is unreachable from many networks in China).
Master:
# Pull the master images onto the master node
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-apiserver-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-scheduler-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-controller-manager-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-kube-dns-amd64:1.14.8
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-dnsmasq-nanny-amd64:1.14.8
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-sidecar-amd64:1.14.8
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/etcd-amd64:3.1.12
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1

Retag the images pulled above, because kubeadm expects images from k8s.gcr.io by default when it initializes a master or node:

# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
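Most of the pull/retag pairs above follow the same pattern, so a small shell loop can shorten the Hangzhou-mirror portion. This is only a convenience sketch over the same image list; the DNS, pause, and flannel images come from other registries (or map to quay.io) and still need the individual commands above:
#!/bin/bash
# Sketch: pull each Hangzhou-mirror image and retag it to the
# k8s.gcr.io name kubeadm expects by default.
MIRROR=registry.cn-hangzhou.aliyuncs.com/kubernetes_containers
for IMG in kube-apiserver-amd64:v1.10.0 \
           kube-scheduler-amd64:v1.10.0 \
           kube-controller-manager-amd64:v1.10.0 \
           kube-proxy-amd64:v1.10.0 \
           etcd-amd64:3.1.12; do
    docker pull "$MIRROR/$IMG"
    docker tag  "$MIRROR/$IMG" "k8s.gcr.io/$IMG"
done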
Nodes:
Pull the images the nodes need onto each node:

# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1

Retag the images:

# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
3. Initialize the master
Swap must be turned off before initializing Kubernetes; otherwise kubeadm aborts with: [ERROR Swap]: running with swap on is not supported. Please disable swap.
# swapoff -a
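swapoff -a only disables swap until the next reboot. To keep it off permanently, you can also comment out the swap entry in /etc/fstab, for example:
# sed -i '/ swap / s/^/#/' /etc/fstab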
Initialize the Kubernetes master (192.168.232.130):
# kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version 1.10.0
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING Hostname]: hostname "k8s-node1" could not be reached
    [WARNING Hostname]: hostname "k8s-node1" lookup k8s-node1 on 192.168.232.2:53: no such host
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.232.130]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-node1] and IPs [192.168.232.130]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 70.502031 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-node1 as master by adding a label and a taint
[markmaster] Master k8s-node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: z3njv7.6vndmsyesgp9bozf
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.232.130:6443 --token z3njv7.6vndmsyesgp9bozf --discovery-token-ca-cert-hash sha256:2959ed1b5e23b5576709c26d14f2c32a15323971f3ade2c3fc3c85c80047350f
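Before kubectl can talk to the new cluster in the next step, run the three commands from the output above as a regular user (repeated here for convenience):
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config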
The output above shows that the master initialized successfully, and that the components the master needs have been created: kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm creates and runs each of these components as a Docker container.
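To see these control-plane containers for yourself, you can list them with docker on the master. Container names are generated by the kubelet and will differ, but kube-apiserver, kube-controller-manager, kube-scheduler, and etcd containers should all appear:
# docker ps --format '{{.Names}}' | grep -E 'apiserver|controller-manager|scheduler|etcd'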
The nodes can then join the Kubernetes cluster using the kubeadm join command printed at the end of that output.
4. Install the flannel network add-on (on the master, once kubectl access has been set up as shown above):
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
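One thing worth noting: the manifest applied above comes from the flannel v0.9.1 tag, while the image pre-pulled and retagged earlier was quay.io/coreos/flannel:v0.10.0-amd64. If the flannel pods sit in ImagePullBackOff because of this mismatch, applying the manifest that matches the pre-pulled image should help (assuming the v0.10.0 tag is published in the coreos/flannel repository):
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml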
5. Set up the nodes (192.168.232.131 / 192.168.232.129)
Swap must be turned off on the nodes as well:
# swapoff -a
Use the kubeadm join command provided by the master to join the k8s cluster:
# kubeadm join 192.168.232.130:6443 --token z3njv7.6vndmsyesgp9bozf --discovery-token-ca-cert-hash sha256:2959ed1b5e23b5576709c26d14f2c32a15323971f3ade2c3fc3c85c80047350f
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.232.130:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.232.130:6443"
[discovery] Requesting info from "https://192.168.232.130:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.232.130:6443"
[discovery] Successfully established connection with API Server "192.168.232.130:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
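Bootstrap tokens expire (24 hours by default), so if a node joins some time after the master was initialized, generate a fresh token on the master. The CA cert hash can be recomputed with the openssl pipeline from the Kubernetes docs:
# kubeadm token create
# kubeadm token list
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'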
Likewise, the nodes use the images pulled earlier to create the components they need, such as kube-proxy. One way to verify this is shown below.
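From the master, list the kube-system pods together with their node placement; a kube-proxy and a kube-flannel pod should be running on every node:
# kubectl get pods -n kube-system -o wide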
6. Check the cluster status
Once the relevant services on the master and the nodes have started, check the cluster state by running 'kubectl get nodes' on the master:
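Illustrative output (the master's hostname k8s-node1 comes from the kubeadm output above; the node hostnames here are assumptions and yours will differ):
# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
k8s-node1   Ready     master    15m       v1.10.0
k8s-node2   Ready     <none>    5m        v1.10.0
k8s-node3   Ready     <none>    5m        v1.10.0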
If the master and all nodes report a status of Ready, the k8s cluster is up and working.
Original post: http://blog.51cto.com/icenycmh/2121370