(1). Configuration Overview
Node Role | IP Address | CPU | Memory |
master, etcd | 192.168.128.110 | 4 cores | 2 GB |
node1/minion1 | 192.168.128.111 | 4 cores | 2 GB |
node2/minion2 | 192.168.128.112 | 4 cores | 2 GB |
(2). Building the Kubernetes Container Cluster Management System
1) Install common packages on all three hosts
bash-completion enables <Tab> completion, vim is an upgraded version of the vi editor, and wget is used to download the Aliyun yum repository file.
# yum -y install bash-completion vim wget
2) Configure the Aliyun yum repository on all three hosts
# mkdir /etc/yum.repos.d/backup
# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup/
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# yum clean all && yum list
3) Modify the hosts file
[root@kube-master ~]# vim /etc/hosts
192.168.128.110 kube-master
192.168.128.110 etcd
192.168.128.111 kube-node1
192.168.128.112 kube-node2
[root@kube-master ~]# scp /etc/hosts 192.168.128.111:/etc/
[root@kube-master ~]# scp /etc/hosts 192.168.128.112:/etc/
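To confirm the entries took effect on each host, a quick name-resolution check along these lines can be run (the hostnames are the ones defined above):

# ping -c 2 kube-master
# ping -c 2 kube-node1
# ping -c 2 kube-node2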
4) Install and configure components on the master/etcd node
First, install the K8s components on the master node
[root@kube-master ~]# yum install -y kubernetes etcd flannel ntp
Disable the firewall, or open the ports the K8s components need: etcd listens on port 2379 by default, and the API Server web port defaults to 8080.
[root@kube-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
# How to open the ports instead of disabling the firewall
[root@kube-master ~]# firewall-cmd --permanent --zone=public --add-port={2379,8080}/tcp
success
[root@kube-master ~]# firewall-cmd --reload
success
[root@kube-master ~]# firewall-cmd --zone=public --list-ports
2379/tcp 8080/tcp
Modify the etcd configuration file, then start etcd and check its status
[root@kube-master ~]# vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"    # Line 3, data storage directory
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.128.110:2379"    # Line 6, addresses etcd listens on for client requests, default port 2379. Setting it to 0.0.0.0 makes it listen on all interfaces
ETCD_NAME="default"    # Line 9, node name. For a single-node etcd cluster this line can stay commented out; it defaults to "default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.128.110:2379"    # Line 21, client URL advertised to the cluster
[root@kube-master ~]# systemctl start etcd && systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@kube-master ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-01-14 14:02:31 CST; 11min ago
 Main PID: 12573 (etcd)
   CGroup: /system.slice/etcd.service
           └─12573 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://localhost:2379,http://192.168.128.110:2379
Jan 14 14:02:31 kube-master etcd[12573]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Jan 14 14:02:31 kube-master etcd[12573]: setting up the initial cluster version to 3.3
Jan 14 14:02:31 kube-master etcd[12573]: set the initial cluster version to 3.3
Jan 14 14:02:31 kube-master etcd[12573]: enabled capabilities for version 3.3
Jan 14 14:02:31 kube-master etcd[12573]: published {Name:default ClientURLs:[http://192.168.128.110:2379]} to cluster cdf818194e3a8c32
Jan 14 14:02:31 kube-master etcd[12573]: ready to serve client requests
Jan 14 14:02:31 kube-master etcd[12573]: ready to serve client requests
Jan 14 14:02:31 kube-master etcd[12573]: serving insecure client requests on 192.168.128.110:2379, this is strongly discouraged!
Jan 14 14:02:31 kube-master etcd[12573]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Jan 14 14:02:31 kube-master systemd[1]: Started Etcd Server.
[root@kube-master ~]# yum -y install net-tools    # net-tools provides netstat, used below
[root@kube-master ~]# netstat -antup | grep 2379
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 12573/etcd
tcp 0 0 192.168.128.110:2379 0.0.0.0:* LISTEN 12573/etcd
tcp 0 0 192.168.128.110:2379 192.168.128.110:49240 ESTABLISHED 12573/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:35638 ESTABLISHED 12573/etcd
tcp 0 0 192.168.128.110:49240 192.168.128.110:2379 ESTABLISHED 12573/etcd
tcp 0 0 127.0.0.1:35638 127.0.0.1:2379 ESTABLISHED 12573/etcd
[root@kube-master ~]# etcdctl cluster-health    # check etcd cluster health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.128.110:2379
cluster is healthy
[root@kube-master ~]# etcdctl member list    # list etcd cluster members
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.128.110:2379 isLeader=true
Modify the common Kubernetes configuration file on the master
[root@kube-master ~]# vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"    # Line 13, whether errors are logged to standard error; if not, they are written to a log file
KUBE_LOG_LEVEL="--v=0"    # Line 16, log level
KUBE_ALLOW_PRIV="--allow-privileged=false"    # Line 19, whether privileged containers are allowed; false means they are not
KUBE_MASTER="--master=http://192.168.128.110:8080"    # Line 22
Modify the API Server configuration file
[root@kube-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # Line 8, the API Server listens on all addresses
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.128.110:2379"    # Line 17, etcd storage address
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"    # Line 20, the cluster IP range allocated to Services
# The default admission-control modules are: NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"    # Line 23, which admission modules are allowed; no restrictions here
KUBE_API_ARGS=""    # Line 26
The Controller Manager configuration file does not need to be modified; you can take a look at it
[root@kube-master ~]# vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@kube-master ~]# rpm -qf /etc/kubernetes/controller-manager
kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64
Modify the Scheduler configuration file
[root@kube-master ~]# vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
Modify the flanneld (overlay network) configuration file
[root@kube-master ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.128.110:2379"    # Line 4, etcd storage address
FLANNEL_ETCD_PREFIX="/k8s/network"    # Line 8, etcd prefix for the flannel configuration
FLANNEL_OPTIONS="--iface=ens33"    # Line 11, physical NIC used for communication
[root@kube-master ~]# mkdir -p /k8s/network
[root@kube-master ~]# etcdctl set /k8s/network/config '{"Network":"10.255.0.0/16"}'    # store the IP address range
{"Network":"10.255.0.0/16"}
[root@kube-master ~]# etcdctl get /k8s/network/config    # the flanneld instances on the nodes will later read this and obtain docker IP ranges automatically
{"Network":"10.255.0.0/16"}
[root@kube-master ~]# systemctl start flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@kube-master ~]# ip a sh    # success: the flannel0 interface is up
......
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.255.29.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::723e:875f:5995:76d0/64 scope link flags 800
       valid_lft forever preferred_lft forever
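As an extra check, flanneld writes the subnet it obtained from etcd into a local environment file that docker later picks up; inspecting it confirms the allocation (a sketch, assuming the default path used by the CentOS flannel package):

[root@kube-master ~]# cat /run/flannel/subnet.env    # should show FLANNEL_NETWORK=10.255.0.0/16 plus the FLANNEL_SUBNET and FLANNEL_MTU assigned to this host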
Restart the API Server, Controller Manager, and Scheduler on the master and enable them at boot. Note: you can do this for each component as you finish configuring it, or all at once after all the changes.
[root@kube-master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler
[root@kube-master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
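Before moving on to the nodes, it is worth confirming that the API Server is actually answering on port 8080; something along these lines works (/version simply returns the build information as JSON):

[root@kube-master ~]# curl -s http://192.168.128.110:8080/version    # should return the Kubernetes version
[root@kube-master ~]# netstat -antup | grep 8080                     # kube-apiserver should be listening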
5) Install and configure components on the node1/minion1 node
First, install the K8s components on the node1/minion1 node
[root@kube-node1 ~]# yum -y install kubernetes flannel ntp
Disable the firewall, or open the ports the K8s components need: kube-proxy uses port 10249 by default, and kubelet uses ports 10248, 10250, and 10255.
[root@kube-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
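If you would rather keep firewalld running, the node ports listed above can be opened instead, mirroring what was done on the master (adjust the port list to whatever your components actually use):

[root@kube-node1 ~]# firewall-cmd --permanent --zone=public --add-port={10248,10249,10250,10255}/tcp
[root@kube-node1 ~]# firewall-cmd --reload
[root@kube-node1 ~]# firewall-cmd --zone=public --list-ports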
Configure the network. flanneld (overlay network) is used here; then restart flanneld and enable it at boot.
[root@kube-node1 ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.128.110:2379"    # etcd storage address
FLANNEL_ETCD_PREFIX="/k8s/network"    # etcd prefix for the flannel configuration
FLANNEL_OPTIONS="--iface=ens33"    # physical NIC used for communication
[root@kube-node1 ~]# systemctl restart flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Modify the common K8s configuration
[root@kube-node1 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.128.110:8080"    # Line 22, points to the master node
Take a look at the kube-proxy configuration; it listens on all IPs by default, so no changes are needed.
[root@kube-node1 ~]# grep -v '^#' /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""    # defaults are used, listening on all IPs
Modify the kubelet configuration file. Note: KUBELET_POD_INFRA_CONTAINER specifies the Pod infrastructure image. It is a base image from which a container is created whenever a Pod starts; if the image is not present locally, kubelet downloads it from the internet.
[root@kube-node1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"    # Line 5, listen on all IPs, so that kubectl can connect to kubelet remotely to inspect Pods and their containers
KUBELET_HOSTNAME="--hostname-override=kube-node1"    # Line 11, override with this node's hostname to speed things up
KUBELET_API_SERVER="--api-servers=http://192.168.128.110:8080"    # Line 14, points to the API Server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"    # Line 17, Pod infrastructure image address
KUBELET_ARGS=""    # Line 20
Restart kube-proxy, kubelet, and docker (none of them are actually running yet, so this is really a first start) and enable them at boot
[root@kube-node1 ~]# systemctl restart kube-proxy kubelet docker
[root@kube-node1 ~]# systemctl enable kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
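Optionally, now that docker is running, the Pod infrastructure image mentioned above can be pre-pulled so the first Pod does not have to wait for the download (a sketch, assuming the node can reach registry.access.redhat.com):

[root@kube-node1 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@kube-node1 ~]# docker images | grep pod-infrastructure    # confirm the image is now available locally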
Check the result
[root@kube-node1 ~]# ip a sh
......
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.255.42.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::a721:7a65:54ea:c2b/64 scope link flags 800
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:5c:5b:ae:8c brd ff:ff:ff:ff:ff:ff
    inet 10.255.42.1/24 scope global docker0
       valid_lft forever preferred_lft forever
[root@kube-node1 ~]# yum -y install net-tools
[root@kube-node1 ~]# netstat -antup | grep proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1473/kube-proxy
tcp 0 0 192.168.128.111:55342 192.168.128.110:8080 ESTABLISHED 1473/kube-proxy
tcp 0 0 192.168.128.111:55344 192.168.128.110:8080 ESTABLISHED 1473/kube-proxy
[root@kube-node1 ~]# netstat -antup | grep kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1698/kubelet
tcp 0 0 192.168.128.111:55350 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp 0 0 192.168.128.111:55351 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp 0 0 192.168.128.111:55354 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp 0 0 192.168.128.111:55356 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp6 0 0 :::4194 :::* LISTEN 1698/kubelet
tcp6 0 0 :::10250 :::* LISTEN 1698/kubelet
tcp6 0 0 :::10255 :::* LISTEN
6) Install and configure components on the node2/minion2 node
Repeat the steps performed on the node1/minion1 node (the only difference is noted below).
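The one setting that must differ from node1 is the kubelet hostname override; everything else is identical. A sketch of the node2-specific change:

[root@kube-node2 ~]# vim /etc/kubernetes/kubelet
KUBELET_HOSTNAME="--hostname-override=kube-node2"    # Line 11, use this node's own hostname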
7) Test: check the running state of the whole cluster from the master node
[root@kube-master ~]# kubectl get nodes
NAME         STATUS    AGE
kube-node1   Ready     1h
kube-node2   Ready     2m
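kubectl can also report the health of the control-plane components and etcd, which makes a reasonable final sanity check (the exact output depends on your setup):

[root@kube-master ~]# kubectl get componentstatuses    # controller-manager, scheduler, and etcd-0 should all report Healthy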
At this point the K8s container cluster management system is fully set up. However, there is no web UI for management yet; the cluster can only be managed with the kubectl command.
Original article: https://www.cnblogs.com/diantong/p/12187745.html