CentOS 7 k8s cluster deployment

Preparation before installing the k8s cluster:
Network layout:
Node    Hostname      IP
Master  k8s_master    192.168.3.216
Node1   k8s_client1   192.168.3.217
Node2   k8s_client2   192.168.3.219

CentOS 7 release:
[root@k8s_master ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

Disable firewalld:
systemctl stop firewalld
systemctl disable firewalld
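
A quick check that the firewall really is off before continuing (should report "inactive" and "not running"):
systemctl is-active firewalld
firewall-cmd --state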

Install the base packages on all three hosts (the package that provides the ntpd service is named ntp):
[root@k8s_master ~]# yum -y update
[root@k8s_master ~]# yum -y install net-tools wget vim ntp
[root@k8s_master ~]# systemctl enable ntpd
[root@k8s_master ~]# systemctl start ntpd
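
After ntpd has run for a minute or two, you can confirm it is syncing against its peers:
[root@k8s_master ~]# ntpq -p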

Set the hostname on each of the three hosts:
Master
hostnamectl --static set-hostname k8s_master
Node1
hostnamectl --static set-hostname k8s_client1
Node2
hostnamectl --static set-hostname k8s_client2

Add the hosts entries; run this on each of the three hosts (note the >> so the existing localhost entries in /etc/hosts are preserved):
cat <<EOF >> /etc/hosts
192.168.3.217 k8s_client1
192.168.3.219 k8s_client2
192.168.3.216 k8s_master
EOF
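
A quick sanity check that the names now resolve from every host:
ping -c 2 k8s_master
ping -c 2 k8s_client1
ping -c 2 k8s_client2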

Deploy the Master:
Install the etcd service:
[root@k8s_master ~]# yum -y install etcd

Edit the configuration file /etc/etcd/etcd.conf:
[root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"

Enable the service at boot, start it, and verify cluster health:
[root@k8s_master ~]# systemctl enable etcd
[root@k8s_master ~]# systemctl start etcd

[root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy
[root@k8s_master ~]# etcdctl -C http://k8s_master:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy
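
Beyond cluster-health, a simple write/read round-trip confirms the etcd v2 API that both kubernetes and flannel will use in this setup (the key name is arbitrary):
[root@k8s_master ~]# etcdctl set /sanity-test "ok"
[root@k8s_master ~]# etcdctl get /sanity-test
[root@k8s_master ~]# etcdctl rm /sanity-test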

Install the docker service:
[root@k8s_master ~]# yum -y install docker
Enable at boot and start the service:
[root@k8s_master ~]# systemctl enable docker
[root@k8s_master ~]# systemctl start docker
Check the docker version:
[root@k8s_master ~]# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64

Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64

Install kubernetes:
[root@k8s_master ~]# yum -y install kubernetes

The following components must run on the Kubernetes master:
    Kubernetes API Server
    Kubernetes Controller Manager
    Kubernetes Scheduler

Edit the apiserver configuration file:
[root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.3.216:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""

Edit the config file:
[root@k8s_master ~]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.3.216:8080"

Enable at boot and start the services:
[root@k8s_master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
[root@k8s_master ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler
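
Before checking ports, a quick health check of the control plane; both commands are standard for this Kubernetes release (kubectl talks to localhost:8080 by default, which the apiserver serves per the config above):
[root@k8s_master ~]# kubectl get componentstatuses
[root@k8s_master ~]# curl -s http://192.168.3.216:8080/version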

Check the listening ports:
[root@k8s_master ~]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 127.0.0.1:2380     0.0.0.0:*          LISTEN   973/etcd
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   970/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   1184/master
tcp6       0      0 :::6443            :::*               LISTEN   1253/kube-apiserver
tcp6       0      0 :::2379            :::*               LISTEN   973/etcd
tcp6       0      0 :::10251           :::*               LISTEN   675/kube-scheduler
tcp6       0      0 :::10252           :::*               LISTEN   674/kube-controller
tcp6       0      0 :::8080            :::*               LISTEN   1253/kube-apiserver
tcp6       0      0 :::22              :::*               LISTEN   970/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   1184/master
tcp6       0      0 :::4001            :::*               LISTEN   973/etcd

Deploy the Nodes:
Install docker
(same procedure as on the Master)
Install kubernetes
(same procedure as on the Master)
Configure and start kubernetes
The following components must run on each node:
kubelet, kube-proxy

On each Node host, make the following changes:
config:
[root@k8s_client1 ~]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.3.216:8080"
kubelet (on k8s_client2, set --hostname-override to 192.168.3.219 instead):
[root@k8s_client1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.3.217"
KUBELET_API_SERVER="--api-servers=http://192.168.3.216:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Enable at boot and start the services:
[root@k8s_client1 ~]# systemctl enable kubelet kube-proxy
[root@k8s_client1 ~]# systemctl start kubelet kube-proxy
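
If either service fails to come up, the journal is the first place to look (a kubelet that cannot reach the apiserver will log it here):
[root@k8s_client1 ~]# journalctl -u kubelet --no-pager -n 50
[root@k8s_client1 ~]# journalctl -u kube-proxy --no-pager -n 50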

Check the listening ports:
[root@k8s_client1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   942/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   2258/master
tcp        0      0 127.0.0.1:10248    0.0.0.0:*          LISTEN   17932/kubelet
tcp        0      0 127.0.0.1:10249    0.0.0.0:*          LISTEN   17728/kube-proxy
tcp6       0      0 :::10250           :::*               LISTEN   17932/kubelet
tcp6       0      0 :::10255           :::*               LISTEN   17932/kubelet
tcp6       0      0 :::22              :::*               LISTEN   942/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   2258/master
tcp6       0      0 :::4194            :::*               LISTEN   17932/kubelet

On the Master, list the cluster nodes and their status:
[root@k8s_master ~]# kubectl get node
NAME STATUS AGE
127.0.0.1 NotReady 1d
192.168.3.217 Ready 1d
192.168.3.219 Ready 1d
[root@k8s_master ~]# kubectl -s http://k8s_master:8080 get node
NAME STATUS AGE
127.0.0.1 NotReady 1d
192.168.3.217 Ready 1d
192.168.3.219 Ready 1d
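
The 127.0.0.1 entry above is most likely a stale registration left over from a kubelet that once ran on the master under its default hostname; assuming it is not needed, it can be removed so only the two real nodes remain:
[root@k8s_master ~]# kubectl delete node 127.0.0.1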

The Kubernetes cluster itself is now up; flannel still needs to be installed.
flannel is an overlay-network tool from CoreOS that solves cross-host communication between Docker containers. The basic idea: reserve a network segment in advance and give each host a slice of it, so that every container gets a distinct IP and all containers appear to share one directly connected network; underneath, flannel encapsulates and forwards the packets over UDP, VXLAN, or similar backends.

Install flannel on the Master and every Node:
[root@k8s_master ~]# yum -y install flannel

Configure flannel:
Edit /etc/sysconfig/flanneld on the Master and every Node.

Master:
[root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
FLANNEL_ETCD_ENDPOINTS="http://192.168.3.216:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

Node:
[root@k8s_client1 ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
FLANNEL_ETCD_ENDPOINTS="http://192.168.3.216:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

Create the network definition in etcd (the key must sit under the FLANNEL_ETCD_PREFIX configured above; 10.8.0.0/16 is the network that flannel carves host subnets from, matching the flannel0/docker0 addresses shown below):
[root@k8s_master ~]# etcdctl mk /atomic.io/network/config '{"Network":"10.8.0.0/16"}'
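
Reading the key back confirms the value and the prefix that flanneld will query:
[root@k8s_master ~]# etcdctl get /atomic.io/network/config
{"Network":"10.8.0.0/16"}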

Enable flanneld at boot and start it, on the Master and every Node:
[root@k8s_master ~]# systemctl enable flanneld
[root@k8s_master ~]# systemctl start flanneld
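
Once flanneld is up, it leases a subnet for this host and records it under /run/flannel/, which docker's unit picks up on restart; that is why the services are restarted next. You can inspect the lease with:
[root@k8s_master ~]# cat /run/flannel/subnet.env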

Restart the dependent services on the Master and the Nodes:
Master:
[root@k8s_master ~]# for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES; done

Node:
[root@k8s_client1 ~]# systemctl restart kube-proxy kubelet docker

Check the flannel network:
On the Master:
[root@k8s_master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:3b:d4 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.216/24 brd 192.168.3.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe98:3bd4/64 scope link
valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 10.8.57.0/16 scope global flannel0
valid_lft forever preferred_lft forever
inet6 fe80::3578:6e81:8dc9:ed82/64 scope link flags 800
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:8b:7c:fd:8d brd ff:ff:ff:ff:ff:ff
inet 10.8.57.1/24 scope global docker0
valid_lft forever preferred_lft forever

On the Node:
[root@k8s_client1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:65:e0 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.217/24 brd 192.168.3.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe98:65e0/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:23:4b:85:6f brd ff:ff:ff:ff:ff:ff
inet 10.8.6.1/24 scope global docker0
valid_lft forever preferred_lft forever
9: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 10.8.6.0/16 scope global flannel0
valid_lft forever preferred_lft forever
inet6 fe80::827:f63e:34ee:1f8e/64 scope link flags 800
valid_lft forever preferred_lft forever
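
As an end-to-end test of the overlay (a minimal sketch, assuming a busybox image can be pulled on both hosts): start a container on the Node, note its 10.8.6.x address, then ping it from a container on the Master. The 10.8.6.2 address below is illustrative; use whatever the first container actually reports.
On the Node:
[root@k8s_client1 ~]# docker run -it --rm busybox sh
/ # ip addr show eth0
On the Master:
[root@k8s_master ~]# docker run -it --rm busybox ping -c 3 10.8.6.2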

Original article: http://blog.51cto.com/jonauil/2084986
