I. Preface
Kubernetes is a container cluster management system open-sourced by Google. Built on top of Docker, it provides a container scheduling service with resource scheduling, load balancing and failover, service registration, and dynamic scaling. The latest version in the CentOS yum repository is currently 1.5.2.
This article builds a Kubernetes platform on CentOS 7.5. Before getting into the deployment, it is worth understanding a few core Kubernetes concepts and the roles they play.
[Figure: Kubernetes architecture diagram]
1. Pods
In Kubernetes, the smallest scheduling unit is not an individual container but an abstraction called a Pod. A Pod is the smallest deployable unit that can be created, destroyed, scheduled, and managed, and it consists of one container or a group of containers.
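For example, a single-container Pod can be defined and created like this (a minimal sketch; the name, labels, and image are placeholders, not from the original post):

```bash
# A single-container Pod, created from an inline manifest
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
EOF

# Check that the Pod was scheduled
kubectl get pods -o wide
```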
2. Replication Controllers
The Replication Controller is one of the most useful features in Kubernetes. It maintains multiple replicas of a Pod: an application usually needs several Pods to serve it, and the Replication Controller guarantees the declared replica count, so even if the host a replica was scheduled on fails, the same number of Pods is started on other hosts. A Replication Controller can create multiple Pod replicas from a replication controller (repcon) template, and it can also adopt already-running Pods; in either case the Pods are associated with it through a label selector.
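A sketch of a Replication Controller keeping three replicas alive (the names and image are placeholders):

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    app: nginx              # Pods are tied to this controller via the label selector
  template:
    metadata:
      labels:
        app: nginx          # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
EOF

kubectl get rc nginx-rc     # DESIRED and CURRENT should both read 3
```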
3. Services
A Service is the outermost-facing unit in Kubernetes. It exposes a virtual IP and service port through which the Pods behind it can be reached. In the version used here this is implemented with iptables NAT forwarding, with the target being a random port opened by kube-proxy. External load-balancer scheduling is currently only offered on Google's cloud, e.g. GCE.
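A sketch of a Service fronting the Pods labelled `app=nginx` (the name is a placeholder; the cluster IP range matches the `--service-cluster-ip-range` configured later in this article):

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx        # traffic is forwarded to Pods carrying this label
  ports:
  - port: 80          # virtual (cluster) service port
    targetPort: 80    # container port behind it
EOF

kubectl get svc nginx-svc   # CLUSTER-IP is allocated from 10.254.0.0/16
```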
4. Labels
Labels are key/value pairs used to distinguish Pods, Services, and Replication Controllers. They are only used to express the relationships between these objects; when operating on an object itself you address it by its name.
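In practice the distinction looks like this (a sketch, reusing the placeholder objects from above):

```bash
kubectl get pods --show-labels            # list objects together with their labels
kubectl get pods -l app=nginx             # select objects by label
kubectl label pod nginx-pod tier=web      # attach another label to a Pod
kubectl delete pod nginx-pod              # operating on the object itself uses its name
```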
5. Proxy
The proxy not only solves the problem of identical service ports colliding on the same host, it is also what forwards a Service's port to the backend Pods. On the backend it load-balances using random or round-robin algorithms.
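You can see the mechanism at work by inspecting the NAT rules kube-proxy installs (a sketch; which chains exist depends on the proxy mode, and the output differs per cluster):

```bash
# NAT chains created by kube-proxy for Service virtual IPs
iptables -t nat -L KUBE-PORTALS-CONTAINER -n 2>/dev/null   # userspace proxy mode
iptables -t nat -L KUBE-SERVICES -n 2>/dev/null            # iptables proxy mode
```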
6. Deployment
Kubernetes Deployments are the official mechanism for updating Pods and Replica Sets (the next generation of the Replication Controller). In a Deployment object you describe only the desired state (the intended running state), and the Deployment controller converges the actual state towards it. For example, if you want to upgrade all webapp:v1.0.9 containers to webapp:v1.1.0, you simply create a Deployment and Kubernetes performs the upgrade according to it. Deployments can also create new resources (Pods, Replica Sets, Replication Controllers) or replace existing ones. A Deployment bundles rollout, rolling upgrades, replica creation, pausing and resuming a rollout, and rolling back to a previous (successful/stable) revision. To some extent, Deployments make unattended releases possible, greatly reducing the coordination overhead and operational risk of going live.
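A sketch of the webapp:v1.0.9 to webapp:v1.1.0 upgrade described above (the Deployment name and replica count are placeholders):

```bash
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1    # Deployment API group in Kubernetes 1.5
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: webapp:v1.0.9
EOF

# Declare the new desired state; the controller rolls the Pods over to it
kubectl set image deployment/webapp webapp=webapp:v1.1.0
kubectl rollout status deployment/webapp   # watch the rollout converge
kubectl rollout undo deployment/webapp     # roll back to the previous revision if needed
```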
II. Kubernetes Cluster Deployment
1. Environment layout
etcd 192.168.20.73
master 192.168.20.73
node1 192.168.20.74
node2 192.168.20.75
2. Prerequisites
Disable the firewall and SELinux, turn off the swap partition, set up passwordless SSH between the master and the nodes, synchronize time via NTP, and make sure every IP can reach the internet. The detailed steps are omitted here; a sketch of the commands follows.
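A condensed sketch of those preparation steps (run as root on every host; adapt the NTP source and hostnames to your environment):

```bash
systemctl stop firewalld && systemctl disable firewalld                # firewall off
setenforce 0                                                           # SELinux permissive now
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # ...and across reboots
swapoff -a                                                             # swap off now
sed -i '/ swap / s/^/#/' /etc/fstab                                    # ...and across reboots
ntpdate pool.ntp.org                                                   # one-shot time sync (or run chronyd)
ssh-keygen -t rsa && ssh-copy-id root@node1 && ssh-copy-id root@node2  # passwordless SSH from the master
```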
The versions used in this article:

```
[root@master ~]# docker --version
Docker version 1.13.1, build 6e3bb8e/1.13.1
[root@master ~]# kubectl --version
Kubernetes v1.5.2
[root@master ~]# etcd --version
etcd Version: 3.2.22
Git SHA: 1674e68
Go Version: go1.9.4
Go OS/Arch: linux/amd64
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@master ~]# uname -a
Linux master 4.17.6-1.el7.elrepo.x86_64 #1 SMP Wed Jul 11 17:24:30 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
```
3. Installation (on the master)
```bash
# Add these entries to /etc/hosts on every machine
192.168.20.73 master
192.168.20.73 etcd3
192.168.20.74 node1
192.168.20.75 node2

# Configure the Kubernetes yum repository
vim kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Install etcd and kubernetes-master on the master. You can also skip the
# master/node split and run `yum -y install kubernetes`, which installs both
# the master and the node packages.
yum -y install etcd kubernetes-master
```
4. This step also installs docker, kubelet, kubectl, and related tools, which is very convenient. If a docker-ce build was installed previously, the installation will fail with a conflict; uninstall it and reinstall.
5. Edit the configuration files. etcd on this machine was deployed in cluster mode, so the etcd name is etcd3; you can pick your own name such as etcd1, or keep the default, `default`.
```
[root@master kubernetes]# pwd
/etc/kubernetes
[root@master kubernetes]# ls
apiserver  config  controller-manager  kubelet  proxy  scheduler

# Edit the etcd configuration file
[root@master ~]# grep -v '^#' /etc/etcd/etcd.conf
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/etcd3"
ETCD_LISTEN_PEER_URLS="http://192.168.20.73:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.20.73:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.20.73:2380"
ETCD_INITIAL_CLUSTER="etcd3=http://192.168.20.73:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-test"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.20.73:2379"

# Start and enable the service
systemctl start etcd
systemctl enable etcd

# Check etcd cluster health
[root@master ~]# etcdctl cluster-health
member ec71f609370df393 is healthy: got healthy result from http://192.168.20.73:2379
cluster is healthy

# List the etcd members; this is a single-node deployment, so there is only one
[root@master ~]# etcdctl member list
ec71f609370df393: name=etcd3 peerURLs=http://192.168.20.73:2380 clientURLs=http://192.168.20.73:2379 isLeader=true
```
6. In cluster mode the etcd cluster health looks like this (for the etcd cluster deployment itself, see my other article):
```
[root@master ~]# etcdctl cluster-health
member 85b5f1a0537e385d is healthy: got healthy result from http://192.168.20.71:2379
member 9f304c9e0feb949d is healthy: got healthy result from http://192.168.20.72:2379
member ec71f609370df393 is healthy: got healthy result from http://192.168.20.73:2379
cluster is healthy
[root@master ~]# etcdctl member list
85b5f1a0537e385d: name=etcd1 peerURLs=http://192.168.20.71:2380 clientURLs=http://192.168.20.71:2379 isLeader=false
9f304c9e0feb949d: name=etcd2 peerURLs=http://192.168.20.72:2380 clientURLs=http://192.168.20.72:2379 isLeader=false
ec71f609370df393: name=etcd3 peerURLs=http://192.168.20.73:2380 clientURLs=http://192.168.20.73:2379 isLeader=true
```
7. Configure the master services
```
# 1) kube-apiserver configuration
[root@master ~]# cat /etc/kubernetes/config
###
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.20.73:8080"
[root@master ~]# cat /etc/kubernetes/apiserver
###
## kubernetes system config
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.73:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"

# 2) controller-manager configuration
[root@master ~]# cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key"

# 3) scheduler configuration
[root@master ~]# cat /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS=""
```
## Additional notes
You will notice several serviceaccount.key lines in the files above. Most access inside k8s requires certificate authentication; without this key, creating Pods later fails. Watching `tail -f /var/log/messages` shows errors like the following:

```
kube-controller-manager: I0919 17:57:49.292014   24551 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"redis-master", UID:"774ac6cd-bbf2-11e8-be6f-00505684252c", APIVersion:"extensions", ResourceVersion:"288298", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redis-master-1610630896 to 1
Sep 19 17:57:58 master3 kube-controller-manager: I0919 17:57:58.320908   24551 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"redis-master-1610630896", UID:"774d552a-bbf2-11e8-be6f-00505684252c", APIVersion:"extensions", ResourceVersion:"288299", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: No API token found for service account "default", retry after the token is automatically created and added to the service account
```
The fix:
1) First, generate the key:
openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048
2) Edit /etc/kubernetes/apiserver and add:
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"
3) Then edit /etc/kubernetes/controller-manager and add:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key"
A workaround I do not recommend is to instead edit the admission-control line in the apiserver configuration:
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
and delete ServiceAccount from it, leaving: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
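Whichever route you take, the changed arguments only take effect after restarting the affected master services (a sketch, assuming the stock systemd unit names from these packages):

```bash
# Restart the master components so they pick up serviceaccount.key
systemctl restart kube-apiserver kube-controller-manager
```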
8. Next, configure the node machines; we will start everything together at the end.
```
# 1) First register the overlay network in etcd, i.e. on the master
[root@master ~]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
{"Network": "10.255.0.0/16"}
[root@master ~]# etcdctl get /k8s/network/config
{"Network": "10.255.0.0/16"}

# This only writes a key into etcd's data store; no actual directory is created.

# 2) Configure node1; flannel is used as the network component.
# Install kubernetes-node and flannel via yum
yum -y install kubernetes-node flannel

# 3) After installation, edit the configuration files
[root@node1 ~]# grep -v '^#' /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.20.73:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--iface=ens192"
# Get the interface name with `ip a`

# 4) Configure node1
[root@node1 ~]# ls /etc/kubernetes/
config  kubelet  proxy
[root@node1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.20.73:8080"
[root@node1 ~]# cat /etc/kubernetes/kubelet
###
KUBELET_ADDRESS="--address=192.168.20.74"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.20.74"
KUBELET_API_SERVER="--api-servers=http://192.168.20.73:8080"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.200.10/source/pause-amd64:3.1"
KUBELET_ARGS=""
[root@node1 ~]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="0.0.0.0"

# Note: by default the kubelet pulls the pod-infrastructure image from the
# internet; as shown above, it can be pointed at your own harbor registry instead.
```
9. node2 is configured identically to node1; the only change needed is replacing the IPs with node2's address.
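Once flanneld has been started (step 10 below), it is worth checking on each node that the overlay network came up; a small sketch, assuming flannel's default udp backend and interface naming of this era:

```bash
# flannel writes the subnet it leased from 10.255.0.0/16 here
cat /run/flannel/subnet.env
ip a show flannel0    # flannel tunnel interface
ip a show docker0     # docker0 should sit inside the flannel subnet
```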
10. Now the exciting moment: starting everything.
```
# Start the master first; etcd is already running (check listening ports with `ss`)
# Start the master services in order
systemctl start kube-apiserver kube-controller-manager kube-scheduler

# Start the services on both nodes in order
systemctl start flanneld kubelet kube-proxy

# On the master, the nodes now show up as Ready
[root@master ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.20.74   Ready     3d
192.168.20.75   Ready     3d
```
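As a quick smoke test (a sketch; nginx here is a placeholder image), verify the control-plane components and schedule a couple of Pods across the nodes:

```bash
kubectl get componentstatuses          # scheduler, controller-manager, etcd should be Healthy
kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide               # shows which node each Pod was scheduled on
```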
With that, the Kubernetes cluster is up.
Kubernetes Chinese community: https://www.kubernetes.org.cn/
Kubernetes Chinese community, command reference: http://docs.kubernetes.org.cn/683.html
Original post: https://www.cnblogs.com/fuhai0815/p/9687183.html