Kubernetes Binary Deployment

1. Environment Preparation

Hostname       IP Address   Role
k8s-master01   10.0.0.10    master
k8s-master02   10.0.0.11    master
k8s-node01     10.0.0.12    node
k8s-node02     10.0.0.13    node

Initialization (on all nodes):

  • Disable the firewall
  • Disable SELinux
  • Disable swap
  • Install ntp and synchronize the clocks
  • Configure hostname resolution
  • Set up passwordless SSH from k8s-master01 to the other machines
  • Install Docker
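The checklist above can be sketched as a script. This is a minimal sketch assuming CentOS 7 (the OS implied by the yum/elrepo commands later in this guide); the commands are printed for review rather than executed, since they must be run as root on each node:

```shell
# Initialization sketch for CentOS 7 nodes (assumption: this guide's OS).
# Printed for review instead of executed; run the output as root on each node.
cat <<'EOF'
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
yum install -y ntp && systemctl enable --now ntpd
EOF
```

Docker installation and /etc/hosts entries are site-specific and are left out of the sketch.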

2. Install and Configure CFSSL

CFSSL can build a local CA and generate the certificates needed later.

Run on k8s-master01:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
sudo mv cfssl_linux-amd64 /root/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
sudo mv cfssljson_linux-amd64 /root/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
sudo mv cfssl-certinfo_linux-amd64 /root/local/bin/cfssl-certinfo

export PATH=/root/local/bin:$PATH
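A quick sanity check after the install (a sketch; it only reports whether each binary is found on PATH):

```shell
# Check that the three cfssl tools installed above are reachable on PATH.
for tool in cfssl cfssljson cfssl-certinfo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing"
  fi
done
```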

3. Configure Ansible

Edit /etc/ansible/hosts on k8s-master01 and add the following entries:

k8s-master01
k8s-master02
k8s-node01
k8s-node02

[master]
k8s-master01
k8s-master02

[node]
k8s-node01
k8s-node02

[other]
k8s-master02
k8s-node01
k8s-node02

4. Upgrade the Kernel

The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS mode. ip_vs_fo.ko first appeared in kernel 3.19, a version not available from the usual RPM repositories for RHEL-family distributions.

[root@k8s-master01 ~]# ansible all -a 'rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm'
[root@k8s-master01 ~]# ansible all -a 'yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y'
[root@k8s-master01 ~]# ansible all -a 'uname -r'

After installation, reboot and boot into the upgraded kernel, then load the module:

[root@k8s-master01 ~]# ansible all -a 'modprobe ip_vs_fo'
[root@k8s-master01 ~]# ansible all -m raw -a "lsmod|grep vs"

5. Create the Installation Directories

[root@k8s-master01 ~]# ansible all -m raw -a 'mkdir /home/work/_app/k8s/etcd/{bin,cfg,ssl} -p'
[root@k8s-master01 ~]# ansible all -a 'ls /home/work/_app/k8s/etcd/'

6. Create the etcd Certificates

[root@k8s-master01 ~]# mkdir -p /home/work/_src/ssl_etcd
[root@k8s-master01 ~]# cd /home/work/_src/ssl_etcd

6.1 Generate the JSON Request File Used for the ETCD Server Certificate

[root@k8s-master01 ssl_etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Notes on ca-config.json: the default policy sets the certificate validity to 10 years (87600h). The etcd profile specifies the certificate usages: signing means the certificate can sign other certificates (the generated ca.pem has CA=TRUE); server auth means a client can use this CA to verify certificates presented by a server; client auth means a server can use this CA to verify certificates presented by a client.

6.2 Create the ETCD CA Certificate Configuration File

[root@k8s-master01 ssl_etcd]# cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
# Generates ca-csr.json

6.3 Create the ETCD Server Certificate Configuration File

[root@k8s-master01 ssl_etcd]# cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.10",
    "10.0.0.11",
    "10.0.0.12",
    "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

# Generates server-csr.json

6.4 Generate the ETCD CA Certificate and Private Key

[root@k8s-master01 ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/10/29 15:19:05 [INFO] generating a new CA key and certificate from CSR
2019/10/29 15:19:05 [INFO] generate received request
2019/10/29 15:19:05 [INFO] received CSR
2019/10/29 15:19:05 [INFO] generating key: rsa-2048
2019/10/29 15:19:06 [INFO] encoded CSR
2019/10/29 15:19:06 [INFO] signed certificate with serial number 118073519875290282867413793117201542018807809668

# Generates ca.csr, ca-key.pem, and ca.pem

6.5 Generate the ETCD Server Certificate and Private Key

[root@k8s-master01 ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/10/29 15:20:40 [INFO] generate received request
2019/10/29 15:20:40 [INFO] received CSR
2019/10/29 15:20:40 [INFO] generating key: rsa-2048
2019/10/29 15:20:40 [INFO] encoded CSR
2019/10/29 15:20:40 [INFO] signed certificate with serial number 567338985519564573573112117172193680391181260406
2019/10/29 15:20:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

# Generates server.csr, server-key.pem, and server.pem

Copy the generated certificates to the certificate directory:

[root@k8s-master01 ssl_etcd]# cp *.pem /home/work/_app/k8s/etcd/ssl/

7. Install ETCD

7.1 Download ETCD

[root@k8s-master01 _src]# cd /home/work/_src
[root@k8s-master01 _src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@k8s-master01 _src]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@k8s-master01 _src]# cd etcd-v3.3.10-linux-amd64
[root@k8s-master01 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /home/work/_app/k8s/etcd/bin/

7.2 Create the ETCD systemd Unit

Create /usr/lib/systemd/system/etcd.service and save it with the following content:

[root@k8s-master01 _src]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/home/work/_app/k8s/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --peer-cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --peer-key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --peer-trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

7.3 Distribute the ETCD Binaries, Certificates, and systemd Unit to the Other Nodes

[root@k8s-master01 _app]# ansible other -m copy -a "src=/home/work/_app/k8s/etcd dest=/home/work/_app/k8s/ mode=0744"
[root@k8s-master01 _app]# ansible other -a "ls /home/work/_app/k8s/etcd"

[root@k8s-master01 _app]# ansible other -m copy -a "src=/usr/lib/systemd/system/etcd.service dest=/usr/lib/systemd/system/"
[root@k8s-master01 _app]# ansible other -a "ls /usr/lib/systemd/system/etcd.service"

7.4 Create the ETCD Main Configuration File

On k8s-master01, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@k8s-master01 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
# Name of this etcd node
ETCD_NAME="etcd00"
# Data directory of this etcd node
ETCD_DATA_DIR="/home/work/_data/etcd"
# URLs this node listens on for peer traffic; multiple URLs are comma-separated,
# each in the form scheme://IP:PORT where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://10.0.0.10:2380"
# URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.10:2379"

#[Clustering]
# Peer URLs this member advertises to the cluster; cluster data travels over
# these addresses, so they must be reachable from every member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.10:2380"
# Client URLs this member advertises
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.10:2379"
# All cluster members, comma-separated, each in the form ETCD_NAME=peer-URL
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.10:2380,etcd01=https://10.0.0.11:2380,etcd02=https://10.0.0.12:2380,etcd03=https://10.0.0.13:2380"
# Token used to bootstrap the cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

On k8s-master02, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@k8s-master02 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.10:2380,etcd01=https://10.0.0.11:2380,etcd02=https://10.0.0.12:2380,etcd03=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

On k8s-node01, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@k8s-node01 home]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.10:2380,etcd01=https://10.0.0.11:2380,etcd02=https://10.0.0.12:2380,etcd03=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

On k8s-node02, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@k8s-node02 home]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.10:2380,etcd01=https://10.0.0.11:2380,etcd02=https://10.0.0.12:2380,etcd03=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

7.5 Start the ETCD Service

[root@k8s-master01 ~]# ansible all -m raw -a "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd"

7.6 Check the ETCD Cluster Health

[root@k8s-master01 ~]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
member 3b17aaa147134dd is healthy: got healthy result from https://10.0.0.10:2379
member 55fcbe0adaa45350 is healthy: got healthy result from https://10.0.0.13:2379
member cebdf10928a06f3c is healthy: got healthy result from https://10.0.0.11:2379
member f7a9c20602b8532e is healthy: got healthy result from https://10.0.0.12:2379
cluster is healthy

7.7 List the ETCD Cluster Members

[root@k8s-master01 ~]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem  member list
3b17aaa147134dd: name=etcd00 peerURLs=https://10.0.0.10:2380 clientURLs=https://10.0.0.10:2379 isLeader=false
55fcbe0adaa45350: name=etcd03 peerURLs=https://10.0.0.13:2380 clientURLs=https://10.0.0.13:2379 isLeader=true
cebdf10928a06f3c: name=etcd01 peerURLs=https://10.0.0.11:2380 clientURLs=https://10.0.0.11:2379 isLeader=false
f7a9c20602b8532e: name=etcd02 peerURLs=https://10.0.0.12:2380 clientURLs=https://10.0.0.12:2379 isLeader=false

8. Install Flannel

8.1 Flanneld Network Overview

Flannel is essentially an overlay network: it wraps TCP packets inside another network packet for routing and forwarding. It currently supports several backends, including UDP, VXLAN, AWS VPC, and GCE routes. In Kubernetes, Flannel is used to set up a layer-3 (network-layer) fabric.

Flannel provides a layer-3 IPv4 network between the nodes of a cluster. It does not control how containers attach to the host network, only how traffic moves between hosts. Flannel does, however, ship a CNI plugin for Kubernetes and provides guidance for integrating with Docker.

Note: without the Flanneld network, pods on different nodes cannot communicate; only pods on the same node can. When the flanneld service starts, it mainly does the following: it fetches the Network configuration from ETCD, carves out a subnet and registers it in ETCD, and records the subnet information in /run/flannel/subnet.env.

8.2 Write the Network Configuration into the ETCD Cluster

[root@k8s-master01 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.10:2379,https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"  set /coreos.com/network/config  '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'

Flanneld (v0.11.0 at the time of writing) does not support the ETCD v3 API, so the configuration key and network data are written through the ETCD v2 API. The Pod network ${CLUSTER_CIDR} must be a /16 range and must match the --cluster-cidr flag passed to kube-controller-manager.
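A small sanity check for that constraint (a sketch; the CIDR value is the one used in this guide):

```shell
# Verify that the Pod network is a /16, as required by this setup and by
# kube-controller-manager's --cluster-cidr flag.
CLUSTER_CIDR="10.244.0.0/16"
case "$CLUSTER_CIDR" in
  */16) echo "OK: $CLUSTER_CIDR is a /16 network" ;;
  *)    echo "ERROR: $CLUSTER_CIDR is not a /16 network" >&2; exit 1 ;;
esac
```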

8.3 Install Flannel

[root@k8s-master01 _src]# ansible all -m shell -a "mkdir /home/work/_app/k8s/kubernetes/{bin,cfg,ssl} -p"
[root@k8s-master01 _src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-master01 _src]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-master01 _src]# mv flanneld mk-docker-opts.sh /home/work/_app/k8s/kubernetes/bin/

The mk-docker-opts.sh script writes the Pod subnet assigned to Flanneld into /run/flannel/docker; when Docker starts later, it uses the environment variables in this file to configure the docker0 bridge. Flanneld communicates with other nodes through the interface that holds the system default route; on nodes with multiple interfaces (e.g. internal and public), the -iface flag selects the interface to use.
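A minimal sketch of that hand-off, using sample subnet.env contents (the values below are illustrative, not read from a live node, and the option string is simplified relative to what mk-docker-opts.sh actually emits):

```shell
# Simulate /run/flannel/subnet.env as written by flanneld (sample values).
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.63.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
EOF

# mk-docker-opts.sh turns these variables into docker0 bridge options;
# Docker later consumes them through $DOCKER_NETWORK_OPTIONS.
. /tmp/subnet.env
echo "DOCKER_NETWORK_OPTIONS=--bip=$FLANNEL_SUBNET --mtu=$FLANNEL_MTU"
```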

8.4 Configure Flannel

Create /home/work/_app/k8s/kubernetes/cfg/flanneld and save it with the following content:

[root@k8s-master01 _src]# cat << EOF | tee /home/work/_app/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.10:2379,https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 -etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem -etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem -etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"
EOF

8.5 Create the Flannel systemd Unit

Create /usr/lib/systemd/system/flanneld.service and save it with the following content:

[root@k8s-master01 _src]# cat << EOF | tee /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/flanneld
ExecStart=/home/work/_app/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/home/work/_app/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

8.6 Configure Docker to Use the Flannel Subnet

Edit /usr/lib/systemd/system/docker.service so that it reads:

[root@k8s-master01 _src]# cat << EOF | tee /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# Load the environment file and pass its variables to ExecStart
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

8.7 Distribute the Flannel Files to the Other Nodes

[root@k8s-master01 _src]# ansible other -m copy -a "src=/home/work/_app/k8s/kubernetes dest=/home/work/_app/k8s/ mode=0744"
[root@k8s-master01 _src]# ansible other  -a "ls /home/work/_app/k8s/kubernetes"
[root@k8s-master01 _src]# ansible other -m copy -a "src=/usr/lib/systemd/system/flanneld.service dest=/usr/lib/systemd/system/"
[root@k8s-master01 _src]# ansible other  -a "ls /usr/lib/systemd/system/flanneld.service"
[root@k8s-master01 _src]# ansible other -m copy -a "src=/usr/lib/systemd/system/docker.service dest=/usr/lib/systemd/system/"
[root@k8s-master01 _src]# ansible other  -a "ls /usr/lib/systemd/system/docker.service"

8.8 Start the Flannel Service

ansible all -m shell -a "systemctl daemon-reload && systemctl stop docker && systemctl enable flanneld && systemctl start flanneld && systemctl start docker && systemctl status flanneld && systemctl status docker"

Note: stop Docker (and any related kubelet) before starting Flannel, so that Flannel can take over the docker0 bridge.

8.9 Verify Flannel

[root@k8s-master01 _src]# ansible all -m raw -a "cat /run/flannel/subnet.env && ip a|egrep -A 2 flannel"

Check that each node's subnet matches expectations.

9. Install Kubernetes

9.1 Generate the JSON Request File for the Kubernetes Certificates

[root@k8s-master01 ~]# cd /home/work/_app/k8s/kubernetes/ssl/
[root@k8s-master01 ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "server": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ],
        "expiry": "8760h"
      },
      "client": {
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
# Generates ca-config.json

9.2 Generate the Kubernetes CA Configuration File and Certificate

[root@k8s-master01 ssl]# cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
# Generates ca-csr.json

Initialize a Kubernetes CA certificate:

[root@k8s-master01 ssl]# ls
ca-config.json  ca-csr.json
[root@k8s-master01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/11/03 17:31:20 [INFO] generating a new CA key and certificate from CSR
2019/11/03 17:31:20 [INFO] generate received request
2019/11/03 17:31:20 [INFO] received CSR
2019/11/03 17:31:20 [INFO] generating key: rsa-2048
2019/11/03 17:31:21 [INFO] encoded CSR
2019/11/03 17:31:21 [INFO] signed certificate with serial number 363442360986653592971594241252855429808195119806
[root@k8s-master01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
# Generates ca.csr, ca-key.pem, and ca.pem

9.3 Generate the Kube API Server Configuration File and Certificate

[root@k8s-master01 ssl]# cat << EOF | tee kube-apiserver-server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.0.0.2",
      "10.0.0.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "API Server"
        }
    ]
}
EOF
# Generates kube-apiserver-server-csr.json

Generate the kube-apiserver certificate:

[root@k8s-master01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  kube-apiserver-server-csr.json
[root@k8s-master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-apiserver-server-csr.json | cfssljson -bare kube-apiserver-server
2019/11/03 17:41:39 [INFO] generate received request
2019/11/03 17:41:39 [INFO] received CSR
2019/11/03 17:41:39 [INFO] generating key: rsa-2048
2019/11/03 17:41:39 [INFO] encoded CSR
2019/11/03 17:41:39 [INFO] signed certificate with serial number 40258078055292579176801571476092842140605641659
2019/11/03 17:41:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  kube-apiserver-server.csr  kube-apiserver-server-csr.json  kube-apiserver-server-key.pem  kube-apiserver-server.pem
# Generates kube-apiserver-server.csr, kube-apiserver-server-key.pem, and kube-apiserver-server.pem

9.4 Generate the kubelet Client Configuration File and Certificate

Create the certificate configuration file:

[root@k8s-master01 ssl]#  cat << EOF | tee kubelet-client-csr.json
{
  "CN": "kubelet",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Kubelet",
      "ST": "Beijing"
    }
  ]
}
EOF

Generate the kubelet client certificate:

[root@k8s-master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssljson -bare kubelet-client
2019/11/03 17:54:22 [INFO] generate received request
2019/11/03 17:54:22 [INFO] received CSR
2019/11/03 17:54:22 [INFO] generating key: rsa-2048
2019/11/03 17:54:22 [INFO] encoded CSR
2019/11/03 17:54:22 [INFO] signed certificate with serial number 584708907544786851775012737222915080474397058953
2019/11/03 17:54:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem
# Generates kubelet-client.csr, kubelet-client-key.pem, and kubelet-client.pem

9.5 Generate the Kube-Proxy Configuration File and Certificate

Create the certificate configuration file:

[root@k8s-master01 ssl]# cat << EOF | tee kube-proxy-client-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System",
      "ST": "Beijing"
    }
  ]
}
EOF
# Generates kube-proxy-client-csr.json

Generate the Kube-Proxy certificate:

[root@k8s-master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json | cfssljson -bare kube-proxy-client
2019/11/04 11:28:01 [INFO] generate received request
2019/11/04 11:28:01 [INFO] received CSR
2019/11/04 11:28:01 [INFO] generating key: rsa-2048
2019/11/04 11:28:01 [INFO] encoded CSR
2019/11/04 11:28:01 [INFO] signed certificate with serial number 67285254948566587248810360676603036529501371119
2019/11/04 11:28:01 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem     kube-proxy-client-csr.json  kube-proxy-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem   kube-proxy-client.csr  kube-proxy-client-key.pem
# Generates kube-proxy-client.csr, kube-proxy-client-key.pem, and kube-proxy-client.pem

9.6 Generate the kubectl Administrator Configuration File and Certificate

Create the kubectl administrator certificate configuration file:

[root@k8s-master01 ssl]# cat << EOF | tee kubernetes-admin-user.csr.json
{
  "CN": "admin",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Cluster Admins",
      "ST": "Beijing"
    }
  ]
}
EOF

Generate the kubectl administrator certificate:

[root@k8s-master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubernetes-admin-user.csr.json | cfssljson -bare kubernetes-admin-user
2019/11/04 11:30:17 [INFO] generate received request
2019/11/04 11:30:17 [INFO] received CSR
2019/11/04 11:30:17 [INFO] generating key: rsa-2048
2019/11/04 11:30:17 [INFO] encoded CSR
2019/11/04 11:30:17 [INFO] signed certificate with serial number 177011301186637544930154578732184051651945219290
2019/11/04 11:30:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 ssl]# ls
ca-config.json  ca-key.pem                 kube-apiserver-server-csr.json  kubelet-client.csr       kubelet-client.pem          kube-proxy-client-key.pem  kubernetes-admin-user.csr.json
ca.csr          ca.pem                     kube-apiserver-server-key.pem   kubelet-client-csr.json  kube-proxy-client.csr       kube-proxy-client.pem      kubernetes-admin-user-key.pem
ca-csr.json     kube-apiserver-server.csr  kube-apiserver-server.pem       kubelet-client-key.pem   kube-proxy-client-csr.json  kubernetes-admin-user.csr  kubernetes-admin-user.pem
# Generates kubernetes-admin-user.csr, kubernetes-admin-user-key.pem, and kubernetes-admin-user.pem

9.7 Copy the Certificates to the Kubernetes Node Machines

[root@k8s-master01 ssl]# ansible other -m copy -a "src=/home/work/_app/k8s/kubernetes/ssl/ dest=/home/work/_app/k8s/kubernetes/ssl/"
[root@k8s-master01 ssl]# ansible all -m shell -a "ls -lrt /home/work/_app/k8s/kubernetes/ssl/"

10. Deploy the Kubernetes Master

The Kubernetes master node runs the following components:

  • APIServer: exposes the RESTful Kubernetes API and is the single entry point for management commands. Every create, delete, update, or query of a resource goes through the APIServer before being handed to etcd. kubectl, the client tool shipped with Kubernetes, talks to the APIServer directly by calling this API.
  • Scheduler: assigns Pods to suitable nodes. Viewed as a black box, its input is a Pod plus a list of candidate nodes, and its output is a binding of that Pod to one node. Kubernetes ships with a default scheduling algorithm and also exposes an interface so users can define their own.
  • Controller manager: if the APIServer handles the front office, the controller manager runs the back office. Each resource type has a corresponding controller, and the controller manager supervises them. For example, when a Pod is created through the APIServer, the APIServer's job ends once the object is stored; the controllers take over from there.
  • etcd: a highly available key-value store that Kubernetes uses to persist the state of every resource behind its RESTful API.
  • Flannel: by default there is no flanneld network, so pods on different nodes cannot communicate, only pods on the same node. Flannel fetches the network configuration from etcd, carves out a subnet, registers it in etcd, and records the subnet information locally.

kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others stand by.

10.1 Download and Install the Kubernetes Server Binaries

[root@k8s-master01 ssl]# cd /home/work/_src/
[root@k8s-master01 _src]# wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 _src]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 _src]# ansible all -m shell -a "mkdir /home/work/_src/kubernetes/server/bin/ -p"
[root@k8s-master01 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kubelet kube-proxy /home/work/_app/k8s/kubernetes/bin/

10.2 Distribute the Files to the Other Nodes

Distribute kubelet, kube-proxy, and kubectl:

[root@k8s-master01 bin]#  ansible other -m copy  -a "src=/home/work/_app/k8s/kubernetes/bin/kubelet  dest=/home/work/_app/k8s/kubernetes/bin/ mode=0744"
[root@k8s-master01 bin]#  ansible other -m copy  -a "src=/home/work/_app/k8s/kubernetes/bin/kube-proxy  dest=/home/work/_app/k8s/kubernetes/bin/ mode=0744"
[root@k8s-master01 bin]#  ansible other -m copy  -a "src=/home/work/_app/k8s/kubernetes/bin/kubectl  dest=/home/work/_app/k8s/kubernetes/bin/ mode=0744"

10.3 Deploy the API Server

Create a TLS bootstrapping token:

[root@k8s-master01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
f03e0561ac6ff77cdd835693f557788d

The random token generated here is f03e0561ac6ff77cdd835693f557788d; note it down, as it is needed later.

[root@k8s-master01 bin]# cat /home/work/_app/k8s/kubernetes/cfg/token-auth-file
f03e0561ac6ff77cdd835693f557788d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
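Put together, generating the token and writing the auth file looks like this (a sketch; /tmp is used here instead of the real cfg path, and each line has the form token,user,uid,"group"):

```shell
# Generate a 32-hex-char bootstrap token and write the token-auth-file line.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token-auth-file
cat /tmp/token-auth-file
```

kube-apiserver later reads this file through its --token-auth-file flag.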

10.3.1 Create the API Server Configuration File

Create /home/work/_app/k8s/kubernetes/cfg/kube-apiserver and save it with the following content:

[root@k8s-master01 bin]# cat /home/work/_app/k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true --v=4 --etcd-servers=https://10.0.0.10:2379,https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 --bind-address=10.0.0.10 --secure-port=6443 --advertise-address=10.0.0.10 --allow-privileged=true --service-cluster-ip-range=10.244.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/home/work/_app/k8s/kubernetes/cfg/token-auth-file --service-node-port-range=30000-50000 --tls-cert-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server.pem  --tls-private-key-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server-key.pem --client-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem --etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem --etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"

10.3.2 Create the API Server systemd Unit

Create /usr/lib/systemd/system/kube-apiserver.service and save it with the following content:

[root@k8s-master01 bin]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

10.3.3 Start the Kube API Server Service

[root@k8s-master01 bin]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver

10.4 Deploy the Scheduler

Create /home/work/_app/k8s/kubernetes/cfg/kube-scheduler and save it with the following content:

[root@k8s-master01 bin]#  cat /home/work/_app/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

10.4.1 Create the kube-scheduler systemd Unit

Create /usr/lib/systemd/system/kube-scheduler.service and save it with the following content:

[root@k8s-master01 bin]#  cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

10.4.2 Start the kube-scheduler Service

[root@k8s-master01 bin]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler

10.5 Deploy the kube-controller-manager Component

10.5.1 Create the kube-controller-manager Configuration File

Create /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager and save it with the following content:

[root@k8s-master01 bin]# cat /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.244.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem  --root-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem"

10.5.2 Create the kube-controller-manager systemd unit

Create the /usr/lib/systemd/system/kube-controller-manager.service file with the following content:

[root@k8s-master01 bin]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

10.5.3 Start the kube-controller-manager service

[root@k8s-master01 bin]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager

10.6 Verify the API server

Add kubectl to the $PATH variable:

[root@k8s-master01 bin]# echo "PATH=/home/work/_app/k8s/kubernetes/bin:$PATH:$HOME/bin" >> /etc/profile
[root@k8s-master01 bin]# source /etc/profile

Check the component and node status:

[root@k8s-master01 ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}

10.7 Deploy kubelet

10.7.1 Create the bootstrap.kubeconfig and kube-proxy.kubeconfig files

Create the /home/work/_app/k8s/kubernetes/cfg/env.sh file with the following content:

[root@k8s-master01 ~]# cd /home/work/_app/k8s/kubernetes/cfg/
[root@k8s-master01 cfg]# cat env.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=f03e0561ac6ff77cdd835693f557788d
KUBE_APISERVER="https://10.0.0.10:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes   --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap   --token=${BOOTSTRAP_TOKEN}   --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default   --cluster=kubernetes   --user=kubelet-bootstrap   --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes   --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy   --client-certificate=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client.pem   --client-key=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client-key.pem   --embed-certs=true   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default   --cluster=kubernetes   --user=kube-proxy   --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

`BOOTSTRAP_TOKEN` is the value `f03e0561ac6ff77cdd835693f557788d` generated earlier when creating the TLS Bootstrapping Token.

Run the script:

[root@k8s-master01 cfg]# sh env.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@k8s-master01 cfg]# ls
bootstrap.kubeconfig  env.sh  flanneld  kube-apiserver  kube-controller-manager  kube-proxy.kubeconfig  kube-scheduler  token-auth-file
# The script generated bootstrap.kubeconfig and kube-proxy.kubeconfig
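As an aside, the 32-hex-character format used for `BOOTSTRAP_TOKEN` can be produced with a common one-liner. This is only a sketch of where such a value comes from; the author's actual token was written to token-auth-file in an earlier step.

```shell
# Generate a random 32-hex-character bootstrap token, the same format as
# f03e0561ac6ff77cdd835693f557788d used in env.sh above.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN}"
```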

Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes:

[root@k8s-master01 cfg]# ansible other -m copy -a "src=/home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig dest=/home/work/_app/k8s/kubernetes/cfg/"
[root@k8s-master01 cfg]# ansible other -m copy -a "src=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig dest=/home/work/_app/k8s/kubernetes/cfg/"
[root@k8s-master01 cfg]# ansible other -m shell -a "ls /home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig"
[root@k8s-master01 cfg]# ansible other -m shell -a "ls /home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

10.7.2 Create the kubelet configuration file

Create the /home/work/_app/k8s/kubernetes/cfg/kubelet.config parameter file with the following content:

[root@k8s-master01 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.10
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the /home/work/_app/k8s/kubernetes/cfg/kubelet startup parameter file with the following content:

[root@k8s-master01 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true --v=4 --hostname-override=10.0.0.10 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config --cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

When kubelet starts, if the file specified by --kubeconfig does not exist, the bootstrap kubeconfig specified by --bootstrap-kubeconfig is used to request a client certificate from the API server. Once the kubelet's certificate request has been approved, the resulting key and certificate are written to the directory given by --cert-dir.
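That startup fallback can be sketched as a plain file check (paths as configured above; purely illustrative — kubelet performs this logic internally):

```shell
# Directory holding the kubeconfigs, as configured above; override CFG_DIR
# to try the check elsewhere.
CFG_DIR=${CFG_DIR:-/home/work/_app/k8s/kubernetes/cfg}

if [ -f "$CFG_DIR/kubelet.kubeconfig" ]; then
  echo "client credentials already exist: $CFG_DIR/kubelet.kubeconfig"
else
  echo "no client credentials yet: bootstrapping via $CFG_DIR/bootstrap.kubeconfig"
fi
```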

  

10.7.3 Bind the kubelet-bootstrap user to the system cluster role

[root@k8s-master01 cfg]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

10.7.4 Create the kubelet systemd unit

Create /usr/lib/systemd/system/kubelet.service with the following content:

[root@k8s-master01 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

10.7.5 Start the kubelet service

[root@k8s-master01 cfg]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

10.8 Approve the master joining the cluster

A CSR can be approved manually, outside the built-in approval flow. An administrator can list CSR requests with kubectl get csr and show the details of one with kubectl describe csr, then approve or deny a request with kubectl certificate approve or kubectl certificate deny.
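When several requests are waiting, the Pending ones can also be selected and approved in bulk. The awk filter is demonstrated below on sample kubectl get csr output (the request names are placeholders made up for the demo; the final pipeline is left commented out because it needs a live cluster):

```shell
# Sample `kubectl get csr --no-headers` output (names are placeholders).
csr_output='node-csr-aaa   2m    kubelet-bootstrap   Pending
node-csr-bbb   30m   kubelet-bootstrap   Approved,Issued
node-csr-ccc   2m    kubelet-bootstrap   Pending'

# Select the names whose CONDITION (last column) is Pending.
echo "$csr_output" | awk '$NF == "Pending" { print $1 }'
# prints:
# node-csr-aaa
# node-csr-ccc

# On a real cluster, the same filter drives the bulk approval:
# kubectl get csr --no-headers | awk '$NF == "Pending" { print $1 }' \
#   | xargs -r kubectl certificate approve
```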

10.8.1 View the CSR list

[root@k8s-master01 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR        CONDITION
node-csr-JxDAOeYmXgaOHWVraQT7ScZ2zt0mj52cC9vA-4WZMUA   2m11s   kubelet-bootstrap  Pending

10.8.2 Approve the join request

[root@k8s-master01 cfg]# kubectl certificate approve node-csr-JxDAOeYmXgaOHWVraQT7ScZ2zt0mj52cC9vA-4WZMUA
certificatesigningrequest.certificates.k8s.io/node-csr-JxDAOeYmXgaOHWVraQT7ScZ2zt0mj52cC9vA-4WZMUA approved

10.8.3 Verify that the master has joined the cluster

Check the CSR list again:

[root@k8s-master01 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-JxDAOeYmXgaOHWVraQT7ScZ2zt0mj52cC9vA-4WZMUA   5m26s   kubelet-bootstrap   Approved,Issued

10.9 Deploy the kube-proxy component

kube-proxy runs on every node. It watches the apiserver for changes to service and endpoint objects and creates routing rules to load-balance traffic to services. The following uses k8s-master01 as an example.

10.9.1 Create the kube-proxy parameter file

Create the /home/work/_app/k8s/kubernetes/cfg/kube-proxy configuration file with the following content:

[root@k8s-master01 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=10.0.0.10 --cluster-cidr=10.244.0.0/16 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

--hostname-override must be changed to the node's own IP on each node.
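A hypothetical way to derive the per-node variants is a sed substitution over the file above. The template is inlined here so the sketch is self-contained; 10.0.0.12 stands in for k8s-node01:

```shell
# kube-proxy options as written for k8s-master01 (10.0.0.10) above.
TEMPLATE='KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=10.0.0.10 --cluster-cidr=10.244.0.0/16 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"'

# Swap in the target node's IP; in practice redirect the result into that
# node's copy of the kube-proxy file.
NODE_IP=10.0.0.12
echo "$TEMPLATE" | sed "s/--hostname-override=10\.0\.0\.10/--hostname-override=${NODE_IP}/"
```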

10.9.2 Create the kube-proxy systemd unit

Create the /usr/lib/systemd/system/kube-proxy.service file with the following content:

[root@k8s-master01 cfg]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-proxy
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

10.9.3 Start kube-proxy

[root@k8s-master01 cfg]# systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy && systemctl status kube-proxy

11. Verify the services

[root@k8s-master01 cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/controller-manager   Healthy   ok                  

NAME             STATUS   ROLES    AGE     VERSION
node/10.0.0.10   Ready    <none>   8m12s   v1.13.0

12. Join the Kubernetes nodes to the cluster

A Kubernetes node runs the following components:

  • kube-proxy: implements service discovery and reverse proxying in Kubernetes. kube-proxy forwards TCP and UDP connections, by default distributing client traffic to the backend pods of a service with a round-robin algorithm. For service discovery, it uses etcd's watch mechanism to track changes to service and endpoint objects in the cluster and maintains a service-to-endpoint mapping, so that changes to backend pod IPs are transparent to callers; it also supports session affinity.
  • kubelet: the master's agent on each node and the most important module on a node. It maintains and manages all containers on the node (containers not created through Kubernetes are left alone); in essence, it keeps each pod's actual state in line with its desired state. On startup, kubelet registers the node with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage. For security, only the HTTPS port is opened; requests are authenticated and authorized, and unauthorized access (e.g. from apiserver or heapster) is rejected.
  • Flannel: without flanneld networking, pods can only communicate within a node, not across nodes. Flannel fetches the network configuration from etcd, carves out a subnet, and registers the subnet information back in etcd.
  • etcd: a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.

12.1 Create the kubelet configuration files

On k8s-master02, create the /home/work/_app/k8s/kubernetes/cfg/kubelet.config parameter file with the following content:

[root@k8s-master02 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.11
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

On k8s-node01, create the /home/work/_app/k8s/kubernetes/cfg/kubelet.config parameter file with the following content:

[root@k8s-node01 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.12
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

On k8s-node02, create the /home/work/_app/k8s/kubernetes/cfg/kubelet.config parameter file with the following content:

[root@k8s-node02 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.13
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

On each node, create the /home/work/_app/k8s/kubernetes/cfg/kubelet startup parameter file with the following content (k8s-master02 shown):

[root@k8s-master02 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true --v=4 --hostname-override=10.0.0.11 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config --cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Replace the --hostname-override parameter with each node's own IP.
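Since the per-node kubelet.config files above differ only in the address field, one hypothetical way to produce them all on k8s-master01 before copying them out (distribution, e.g. via ansible copy, is omitted) is a small loop:

```shell
# Render one kubelet.config per node, differing only in `address`
# (node IPs taken from the host table in section 1); files land in a
# temporary directory here instead of the real cfg directory.
OUT=$(mktemp -d)
for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
  cat > "$OUT/kubelet.config.$ip" <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: $ip
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
done
ls "$OUT"
```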

12.2 Distribute the kubelet systemd unit

[root@k8s-master01 ~]# ansible other -m copy -a "src=/usr/lib/systemd/system/kubelet.service dest=/usr/lib/systemd/system/"

[root@k8s-master01 ~]# ansible other -m shell -a "ls /usr/lib/systemd/system/kubelet.service"

12.3 Start kubelet on each node

[root@k8s-master01 ~]# ansible all -m shell -a "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet" | grep running

12.4 Approve the nodes joining the cluster

Check the CSR list; the new nodes show Pending requests:

[root@k8s-master01 ~]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-JxDAOeYmXgaOHWVraQT7ScZ2zt0mj52cC9vA-4WZMUA   30m    kubelet-bootstrap   Approved,Issued
node-csr-MrvfF1bll1SSDFHHOs5bx4Xw3peQ3vz_mNYdeY6hrzA   114s   kubelet-bootstrap   Pending
node-csr-Tl7t4oF9mp_obWTf5F_55whMrcWStRc3f4TYJtKNTVU   114s   kubelet-bootstrap   Pending
node-csr-VSttFy1WE5JiBIsfUdKTuCi-EA_DdkLihE1csGCbcXE   114s   kubelet-bootstrap   Pending

The following command shows a request's details; you can see it was sent from k8s-master02's IP address 10.0.0.11:

[root@k8s-master01 ~]# kubectl describe csr node-csr-MrvfF1bll1SSDFHHOs5bx4Xw3peQ3vz_mNYdeY6hrzA
Name:               node-csr-MrvfF1bll1SSDFHHOs5bx4Xw3peQ3vz_mNYdeY6hrzA
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Mon, 04 Nov 2019 20:05:08 +0800
Requesting User:    kubelet-bootstrap
Status:             Pending
Subject:
         Common Name:    system:node:10.0.0.11
         Serial Number:
         Organization:   system:nodes
Events:  <none>

Approve the pending nodes to join the cluster:

[root@k8s-master01 ~]# kubectl certificate approve node-csr-MrvfF1bll1SSDFHHOs5bx4Xw3peQ3vz_mNYdeY6hrzA
certificatesigningrequest.certificates.k8s.io/node-csr-MrvfF1bll1SSDFHHOs5bx4Xw3peQ3vz_mNYdeY6hrzA approved
[root@k8s-master01 ~]# kubectl certificate approve node-csr-Tl7t4oF9mp_obWTf5F_55whMrcWStRc3f4TYJtKNTVU
certificatesigningrequest.certificates.k8s.io/node-csr-Tl7t4oF9mp_obWTf5F_55whMrcWStRc3f4TYJtKNTVU approved
[root@k8s-master01 ~]# kubectl certificate approve node-csr-VSttFy1WE5JiBIsfUdKTuCi-EA_DdkLihE1csGCbcXE
certificatesigningrequest.certificates.k8s.io/node-csr-VSttFy1WE5JiBIsfUdKTuCi-EA_DdkLihE1csGCbcXE approved

Check the CSR list again; the nodes' join requests have been approved:

[root@k8s-master01 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-JxDAOeYmXgaOHWVraQT7ScZ2zt0mj52cC9vA-4WZMUA   35m     kubelet-bootstrap   Approved,Issued
node-csr-MrvfF1bll1SSDFHHOs5bx4Xw3peQ3vz_mNYdeY6hrzA   6m59s   kubelet-bootstrap   Approved,Issued
node-csr-Tl7t4oF9mp_obWTf5F_55whMrcWStRc3f4TYJtKNTVU   6m59s   kubelet-bootstrap   Approved,Issued
node-csr-VSttFy1WE5JiBIsfUdKTuCi-EA_DdkLihE1csGCbcXE   6m59s   kubelet-bootstrap   Approved,Issued

13. Remove a node from the cluster

Before deleting a node, drain the pods running on it, then delete the node with the following commands:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

For the node to be removed cleanly, and for it to send a new CSR request to the cluster when it starts again, you also need to delete the cached CSR data on the removed node:

[root@<node name> ~]# ls /home/work/_app/k8s/kubernetes/ssl_cert/
kubelet-client-2019-11-04-20-11-09.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@<node name> ~]# rm -fr /home/work/_app/k8s/kubernetes/ssl_cert/*

After the cached CSR data has been deleted, restart kubelet and the master will receive a new CSR request.

14. Label the nodes

View all node statuses; before labeling, the ROLES value defaults to <none>:

[root@k8s-master01 ~]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
10.0.0.10   Ready    <none>   13h   v1.13.0
10.0.0.11   Ready    <none>   13h   v1.13.0
10.0.0.12   Ready    <none>   13h   v1.13.0
10.0.0.13   Ready    <none>   13h   v1.13.0

k8s-master01Master 打标签

[root@k8s-master01 ~]# kubectl label node 10.0.0.10 node-role.kubernetes.io/master='master'
node/10.0.0.10 labeled

k8s-master02 的 Node打标签

[root@k8s-master01 ~]# kubectl label node 10.0.0.11 node-role.kubernetes.io/master='k8s-master02'
node/10.0.0.11 labeled
[root@k8s-master01 ~]# kubectl label node 10.0.0.11 node-role.kubernetes.io/node='k8s-master02'
node/10.0.0.11 labeled
[root@k8s-master01 ~]# kubectl label node 10.0.0.12 node-role.kubernetes.io/node='k8s-node01'
node/10.0.0.12 labeled
[root@k8s-master01 ~]# kubectl label node 10.0.0.13 node-role.kubernetes.io/node='k8s-node02'
node/10.0.0.13 labeled
node/10.0.0.13 labeled

[root@k8s-master01 ~]# kubectl get node
NAME        STATUS   ROLES         AGE   VERSION
10.0.0.10   Ready    master        13h   v1.13.0
10.0.0.11   Ready    master,node   13h   v1.13.0
10.0.0.12   Ready    node          13h   v1.13.0
10.0.0.13   Ready    node          13h   v1.13.0

Remove the master label from k8s-master02:

[root@k8s-master01 ~]# kubectl label node 10.0.0.11 node-role.kubernetes.io/master-
node/10.0.0.11 labeled
[root@k8s-master01 ~]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
10.0.0.10   Ready    master   13h   v1.13.0
10.0.0.11   Ready    node     13h   v1.13.0
10.0.0.12   Ready    node     13h   v1.13.0
10.0.0.13   Ready    node     13h   v1.13.0

Original source: https://www.cnblogs.com/benjamin77/p/11875737.html
