Kubernetes 1.11 Deployment
Reference documentation for the build:
https://github.com/kubernetes/kubernetes/tree/release-1.11/cluster/images/hyperkube
Environment setup
1. Install Docker
Install Docker on all nodes:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
docker-ce-17.03.2.ce cannot be installed directly with yum; docker-ce-selinux-17.03.2.ce has to be installed alongside it:
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm docker-ce-17.03.2.ce -y
mkdir /etc/docker/
vim /etc/docker/daemon.json
{
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
Start Docker:
systemctl restart docker
2. Install Go
Install Go on 10.39.1.43 (the build machine):
wget https://dl.google.com/go/go1.10.3.linux-amd64.tar.gz
tar -C /usr/local/ -xzf go1.10.3.linux-amd64.tar.gz
vim /etc/profile
export PATH=$PATH:/usr/local/go/bin
Set GOROOT and GOPATH (Go was unpacked to /usr/local/go above; GOPATH of $HOME/go is an assumption here, adjust to your layout):
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
source /etc/profile
cd $GOPATH && mkdir {pkg,src,bin,lib}
go version
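With the tarball above, go version should print:
go version go1.10.3 linux/amd64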
3. Build Kubernetes
Download the Kubernetes source:
cd $GOPATH/src
git clone https://github.com/kubernetes/kubernetes.git
git tag
git checkout v1.11.0
git branch -v
Build the binaries with the local Go toolchain:
KUBE_BUILD_PLATFORMS=linux/amd64 make all
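The binaries from a local build land under _output/ in the source tree; with the platform above they should be in _output/local/bin/linux/amd64/:
ls _output/local/bin/linux/amd64/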
Release notes:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1110
4. Environment
Three masters, one node:
IP            Hostname
10.39.10.160 kubernetes-node-160
10.39.10.159 kubernetes-master-159
10.39.10.156 kubernetes-master-156
10.39.10.154 kubernetes-master-154
Add the hostnames to /etc/hosts on every machine, as shown below.
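Appended to /etc/hosts, the table above becomes:
10.39.10.160 kubernetes-node-160
10.39.10.159 kubernetes-master-159
10.39.10.156 kubernetes-master-156
10.39.10.154 kubernetes-master-154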
OS:
CentOS Linux release 7.5.1804, kernel 3.10.0-862.9.1.el7.x86_64
5. Create certificates
Use CloudFlare's PKI toolkit cfssl to generate the CA certificates:
https://github.com/cloudflare/cfssl
install cfssl
mkdir -p /opt/local/cfssl
cd /opt/local/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
chmod +x *
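The tools are invoked below as plain cfssl/cfssljson, so put /opt/local/cfssl on the PATH (or symlink the binaries into /usr/local/bin):
export PATH=$PATH:/opt/local/cfssl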
Create the CA certificate:
mkdir -p /opt/k8s/ssl
cd /opt/k8s/ssl
config.json:
vim config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
vim csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the CA certificate and private key:
cfssl gencert -initca csr.json | cfssljson -bare ca
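cfssljson -bare ca writes three files into the current directory:
ls ca*
# ca.csr  ca-key.pem  ca.pem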
Distribute the certificates
Create /etc/kubernetes/ssl on every Kubernetes machine and copy the certificates there:
ansible -i hosts k8s -m shell -a "mkdir /etc/kubernetes/ssl -p"
ansible -i hosts k8s -m copy -a "src=/opt/k8s/ssl dest=/etc/kubernetes/"
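The ansible commands here and below assume an inventory file named hosts defining k8s and etcd groups; a sketch matching this cluster (the exact grouping is an assumption):
[k8s]
10.39.10.154
10.39.10.156
10.39.10.159
10.39.10.160
[etcd]
10.39.10.154
10.39.10.156
10.39.10.159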
etcd cluster
The etcd version validated for Kubernetes v1.11.0 is v3.2.18:
wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
tar zxvf etcd-v3.2.18-linux-amd64.tar.gz
cd etcd-v3.2.18-linux-amd64
ansible -i hosts k8s -m copy -a "src=etcd-v3.2.18-linux-amd64/etcd dest=/usr/bin/"
ansible -i hosts k8s -m copy -a "src=etcd-v3.2.18-linux-amd64/etcdctl dest=/usr/bin/"
Create the etcd certificates:
cd /opt/k8s/ssl
vim etcd-csr.json
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"10.39.10.154",
"10.39.10.156",
"10.39.10.159"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the etcd certificate and key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Inspect the generated certificate:
cfssl-certinfo -cert etcd.pem
Copy the certificates to the etcd servers:
#etcd-1
scp etcd*.pem root@10.39.10.154:/etc/kubernetes/ssl/
#etcd-2
scp etcd*.pem root@10.39.10.156:/etc/kubernetes/ssl/
#etcd-3
scp etcd*.pem root@10.39.10.159:/etc/kubernetes/ssl/
If etcd runs as a non-root user, it cannot read the private key by default; relax the permissions:
chmod 644 /etc/kubernetes/ssl/etcd-key.pem
ansible -i hosts etcd -m shell -a "chmod 644 /etc/kubernetes/ssl/etcd-key.pem"
etcd configuration
Create the etcd data directory; the default is /var/lib/etcd/, but /data/etcd is used here (run on every etcd node):
useradd etcd
mkdir /data/etcd -p
ansible -i hosts etcd -m shell -a "mkdir /data/etcd -p"
chown -R etcd:etcd /data/etcd
chmod +x /usr/bin/etcd
chmod +x /usr/bin/etcdctl
# etcd-1
vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/data/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd --name=etcd1 --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem --peer-cert-file=/etc/kubernetes/ssl/etcd.pem --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem --trusted-ca-file=/etc/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem --initial-advertise-peer-urls=https://10.39.10.154:2380 --listen-peer-urls=https://10.39.10.154:2380 --listen-client-urls=https://10.39.10.154:2379,http://127.0.0.1:2379 --advertise-client-urls=https://10.39.10.154:2379 --initial-cluster-token=k8s-etcd-cluster --initial-cluster=etcd1=https://10.39.10.154:2380,etcd2=https://10.39.10.156:2380,etcd3=https://10.39.10.159:2380 --initial-cluster-state=new --data-dir=/data/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# etcd-2
vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/data/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd --name=etcd2 --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem --peer-cert-file=/etc/kubernetes/ssl/etcd.pem --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem --trusted-ca-file=/etc/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem --initial-advertise-peer-urls=https://10.39.10.156:2380 --listen-peer-urls=https://10.39.10.156:2380 --listen-client-urls=https://10.39.10.156:2379,http://127.0.0.1:2379 --advertise-client-urls=https://10.39.10.156:2379 --initial-cluster-token=k8s-etcd-cluster --initial-cluster=etcd1=https://10.39.10.154:2380,etcd2=https://10.39.10.156:2380,etcd3=https://10.39.10.159:2380 --initial-cluster-state=new --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#etcd-3
vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/data/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd --name=etcd3 --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem --peer-cert-file=/etc/kubernetes/ssl/etcd.pem --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem --trusted-ca-file=/etc/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem --initial-advertise-peer-urls=https://10.39.10.159:2380 --listen-peer-urls=https://10.39.10.159:2380 --listen-client-urls=https://10.39.10.159:2379,http://127.0.0.1:2379 --advertise-client-urls=https://10.39.10.159:2379 --initial-cluster-token=k8s-etcd-cluster --initial-cluster=etcd1=https://10.39.10.154:2380,etcd2=https://10.39.10.156:2380,etcd3=https://10.39.10.159:2380 --initial-cluster-state=new --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start etcd (on every etcd node):
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
Check the etcd cluster health:
etcdctl --endpoints=https://10.39.10.154:2379,https://10.39.10.156:2379,https://10.39.10.159:2379 --cert-file=/etc/kubernetes/ssl/etcd.pem --ca-file=/etc/kubernetes/ssl/ca.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem cluster-health
List the etcd cluster members:
etcdctl --endpoints=https://10.39.10.154:2379,https://10.39.10.156:2379,https://10.39.10.159:2379 --cert-file=/etc/kubernetes/ssl/etcd.pem --ca-file=/etc/kubernetes/ssl/ca.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem member list
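If everything came up, cluster-health prints one healthy line per member and finishes with:
cluster is healthy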
Configure the Kubernetes cluster
Master and Node
kubectl is installed on every machine used to operate the cluster.
Master: the components deployed are kube-apiserver, kube-scheduler and kube-controller-manager. kube-scheduler is responsible for resource scheduling;
kube-controller-manager runs the control loops (deployment controller, replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on) and talks to kube-apiserver.
Install the components
Copy the binaries built earlier (kube-apiserver, kube-controller-manager, kube-scheduler, kubectl, kubelet, kubeadm) to /usr/local/bin/.
Create the admin certificate
kubectl talks to kube-apiserver over the secure port; TLS certificates and keys are required for the secure connection:
vim admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
Generate the admin certificate and key:
cd /opt/k8s/ssl
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
Copy the certificates to the master machines (154, 156, 159):
scp admin* root@10.39.10.154:/etc/kubernetes/ssl/
# repeat for 10.39.10.156 and 10.39.10.159
Generate the kubectl configuration
# the resulting kubeconfig is stored under /root/.kube
Configure the cluster parameters (run on 10.39.10.154):
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://127.0.0.1:6443
Configure client authentication:
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
kubectl config use-context kubernetes
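These four commands write /root/.kube/config; verify the result with:
kubectl config view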
Create the kubernetes (apiserver) certificate:
vim kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.39.10.154",
"10.39.10.156",
"10.39.10.159",
"10.254.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
# In the hosts field: 127.0.0.1 is localhost; 10.39.10.154, 10.39.10.156 and 10.39.10.159 are the master IPs (list every master); 10.254.0.1 is the kubernetes service VIP, normally the first IP of the service network, visible via kubectl get svc once the cluster is up.
Generate the kubernetes certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
Copy kubernetes*.pem to /etc/kubernetes/ssl/ on every master:
scp kubernetes*.pem root@10.39.10.154:/etc/kubernetes/ssl/
# repeat for 10.39.10.156 and 10.39.10.159
Configure kube-apiserver
[root@kubernetes-master-154 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
da4090c6baadef99e577a9ac5da6f684
# Create encryption-config.yaml, using a string generated as above for the secret (the value below came from an earlier run)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: 40179b02a8f6da07d90392ae966f7749
      - identity: {}
EOF
# Copy it to every master
scp encryption-config.yaml root@10.39.10.154:/etc/kubernetes/
# repeat for 10.39.10.156 and 10.39.10.159
# Create the audit policy file
https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
cat >> audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
Copy it to all master machines:
scp audit-policy.yaml root@10.39.10.154:/etc/kubernetes/
# repeat for 10.39.10.156 and 10.39.10.159
Create the kube-apiserver.service file
# custom systemd unit
# required on every master; change --advertise-address to each master's own IP
vi /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction --anonymous-auth=false --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml --advertise-address=10.39.10.154 --allow-privileged=true --apiserver-count=3 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kubernetes/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --secure-port=6443 --client-ca-file=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/etcd.pem --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem --etcd-servers=https://10.39.10.154:2379,https://10.39.10.156:2379,https://10.39.10.159:2379 --event-ttl=1h --kubelet-https=true --insecure-bind-address=127.0.0.1 --insecure-port=8080 --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --service-cluster-ip-range=10.254.0.0/18 --service-node-port-range=30000-32000 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --enable-bootstrap-token-auth --v=1
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
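The unit writes audit logs to /var/log/kubernetes/audit.log, so create the directory on each master before the first start:
mkdir -p /var/log/kubernetes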
Start kube-apiserver:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
Configure kube-controller-manager
vim /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080 --allocate-node-cidrs=true --service-cluster-ip-range=10.254.0.0/18 --cluster-cidr=10.254.64.0/18 --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --feature-gates=RotateKubeletServerCertificate=true --controllers=*,tokencleaner,bootstrapsigner --experimental-cluster-signing-duration=86700h0m0s --cluster-name=kubernetes --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true --node-monitor-grace-period=40s --node-monitor-period=5s --pod-eviction-timeout=5m0s --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kube-controller-manager:
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
Configure kube-scheduler
vi /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080 --leader-elect=true --v=1
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kube-scheduler:
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
Verify the masters
Run kubectl get componentstatuses on each master.
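On a healthy master the output looks roughly like this:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}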
Configure kubelet authorization
Authorize kube-apiserver to call the kubelet APIs used for exec, run, logs, and so on.
# this RBAC binding only needs to be created once
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Create the bootstrap kubeconfig files
# Note: a token is valid for 1 day; if unused past that it expires and a new one must be created
# Create a token for every kubelet in the cluster
Run on a master:
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:kubernetes-master-154 --kubeconfig ~/.kube/config
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:kubernetes-master-156 --kubeconfig ~/.kube/config
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:kubernetes-master-159 --kubeconfig ~/.kube/config
# List the generated tokens
kubeadm token list --kubeconfig ~/.kube/config
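The tokens used below came from this listing; a sketch of the relevant columns (TTL and usage columns omitted):
TOKEN                     DESCRIPTION              EXTRA GROUPS
aaa8j5.4nvwg82imbrzb7r2   kubelet-bootstrap-token  system:bootstrappers:kubernetes-master-154
rz2col.l8x1x9dg5kg7jjw6   kubelet-bootstrap-token  system:bootstrappers:kubernetes-master-156
9ocdef.pjd1s7twtro2ho8a   kubelet-bootstrap-token  system:bootstrappers:kubernetes-master-159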
To tell them apart, each bootstrap.kubeconfig below is generated under a name that includes its node.
Generate the kubernetes-master-154 bootstrap.kubeconfig:
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kubernetes-master-154-bootstrap.kubeconfig
# Configure client authentication
kubectl config set-credentials kubelet-bootstrap --token=aaa8j5.4nvwg82imbrzb7r2 --kubeconfig=kubernetes-master-154-bootstrap.kubeconfig
# Configure the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubernetes-master-154-bootstrap.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=kubernetes-master-154-bootstrap.kubeconfig
Move the generated kubernetes-master-154-bootstrap.kubeconfig into place (on master-154):
mv kubernetes-master-154-bootstrap.kubeconfig /etc/kubernetes/bootstrap.kubeconfig
Generate the kubernetes-master-156 bootstrap.kubeconfig:
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kubernetes-master-156-bootstrap.kubeconfig
# Configure client authentication
kubectl config set-credentials kubelet-bootstrap --token=rz2col.l8x1x9dg5kg7jjw6 --kubeconfig=kubernetes-master-156-bootstrap.kubeconfig
# Configure the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubernetes-master-156-bootstrap.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=kubernetes-master-156-bootstrap.kubeconfig
# Move the generated kubernetes-master-156-bootstrap.kubeconfig into place (on master-156)
mv kubernetes-master-156-bootstrap.kubeconfig /etc/kubernetes/bootstrap.kubeconfig
# Generate the kubernetes-master-159 bootstrap.kubeconfig
# Configure cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kubernetes-master-159-bootstrap.kubeconfig
# Configure client authentication
kubectl config set-credentials kubelet-bootstrap --token=9ocdef.pjd1s7twtro2ho8a --kubeconfig=kubernetes-master-159-bootstrap.kubeconfig
# Configure the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubernetes-master-159-bootstrap.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=kubernetes-master-159-bootstrap.kubeconfig
# Move into place (on master-159)
mv kubernetes-master-159-bootstrap.kubeconfig /etc/kubernetes/bootstrap.kubeconfig
Configure the bootstrap RBAC permission:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
Create a ClusterRole for auto-approving CSR requests:
vim /etc/kubernetes/tls-instructs-csr.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
- apiGroups: ["certificates.k8s.io"]
resources: ["certificatesigningrequests/selfnodeserver"]
verbs: ["create"]
# Apply the yaml file
kubectl apply -f /etc/kubernetes/tls-instructs-csr.yaml
# Inspect it
kubectl describe ClusterRole/system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
# Bind the ClusterRoles to the appropriate groups
# Auto-approve the first certificate CSR from the system:bootstrappers group (TLS bootstrapping)
kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
# Auto-approve renewal CSRs for the kubelet client certificate from the system:nodes group
kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
# Auto-approve renewal CSRs for the kubelet serving certificate (port 10250) from the system:nodes group
kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
Create the kubelet.service file
About the ROLES column shown by kubectl get node:
Label a dedicated master: kubectl label node kubernetes-master-154 node-role.kubernetes.io/master=""
A dedicated master should also be tainted NoSchedule so that ordinary pods are not scheduled onto it:
kubectl taint nodes kubernetes-master-154 node-role.kubernetes.io/master=:NoSchedule
For a machine that is both master and node: kubectl label node <node-name> node-role.kubernetes.io/master=""
Label a dedicated node: kubectl label node kubernetes-node-160 node-role.kubernetes.io/node=""
To remove a label, append - to its key, e.g.: kubectl label nodes <node-name> node-role.kubernetes.io/node-
# Create kubelet.service
vi /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet --hostname-override=kubernetes-master-154 --pod-infra-container-image=jicki/pause-amd64:3.1 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet.config.json --cert-dir=/etc/kubernetes/ssl --logtostderr=true --v=2
[Install]
WantedBy=multi-user.target
# Create the kubelet config file (on each master, set address to that machine's IP)
vi /etc/kubernetes/kubelet.config.json
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "10.39.10.154",
"port": 10250,
"readOnlyPort": 0,
"cgroupDriver": "cgroupfs",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"RotateCertificates": true,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"MaxPods": "512",
"failSwapOn": false,
"containerLogMaxSize": "10Mi",
"containerLogMaxFiles": 5,
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.254.0.2"]
}
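kubelet.service sets WorkingDirectory=/var/lib/kubelet, which must exist before the first start:
mkdir -p /var/lib/kubelet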
Start kubelet:
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
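With the auto-approve bindings above, the kubelet's bootstrap CSR should be approved automatically; confirm the node registers:
kubectl get csr
kubectl get nodes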
Configure kube-proxy
Create the kube-proxy certificate:
vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the kube-proxy certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Copy them into place on the masters:
scp kube-proxy* root@10.39.10.154:/etc/kubernetes/ssl/
scp kube-proxy* root@10.39.10.156:/etc/kubernetes/ssl/
scp kube-proxy* root@10.39.10.159:/etc/kubernetes/ssl/
Create the kube-proxy kubeconfig file
# Configure the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-proxy.kubeconfig
# Configure client authentication
kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
# Configure the context
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Copy it to the other masters
scp kube-proxy.kubeconfig root@10.39.10.156:/etc/kubernetes/
scp kube-proxy.kubeconfig root@10.39.10.159:/etc/kubernetes/
# Create the kube-proxy.service file
# ipvs mode requires the ipvsadm, ipset and conntrack packages
yum install ipset ipvsadm conntrack-tools.x86_64 -y
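ipvs mode also needs its kernel modules loaded; a sketch (nf_conntrack_ipv4 is the conntrack module name on this 3.10 kernel):
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done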
cd /etc/kubernetes/
vim kube-proxy.config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.39.10.154
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.254.64.0/18
healthzBindAddress: 10.39.10.154:10256
hostnameOverride: kubernetes-master-154
kind: KubeProxyConfiguration
metricsBindAddress: 10.39.10.154:10249
mode: "ipvs"
vi /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.config.yaml --logtostderr=true --v=1
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
If kube-proxy fails to start because /var/lib/kube-proxy does not exist, create the directory manually: mkdir -p /var/lib/kube-proxy
Start kube-proxy:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Node
A dedicated node runs these components: docker, calico, kubelet and kube-proxy. For master HA, nodes reach the API through a local nginx load balancer:
each node runs an nginx that reverse-proxies all the apiservers;
kubelet and kube-proxy on the node connect to the local nginx proxy port;
when nginx finds a backend unreachable it drops the faulty apiserver, which provides apiserver HA.
# Distribute the certificates to the node
scp ca.pem kube-proxy.pem kube-proxy-key.pem root@10.39.10.160:/etc/kubernetes/ssl/
Create the nginx proxy
yum install epel-release -y
yum install nginx -y
# write the full config (overwriting rather than appending, so the stock http config does not conflict)
cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}
stream {
upstream kube_apiserver {
least_conn;
server 10.39.10.154:6443;
server 10.39.10.156:6443;
server 10.39.10.159:6443;
}
server {
listen 0.0.0.0:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
EOF
# nginx runs as a docker container, managed by a systemd unit
vim /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service
[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 -v /etc/nginx:/etc/nginx --name nginx-proxy --net=host --restart=on-failure:5 --memory=512M nginx:1.13.7-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s
[Install]
WantedBy=multi-user.target
# Start nginx-proxy
systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy
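Sanity-check the proxy from the node; since the apiserver runs with --anonymous-auth=false, an HTTP 401 response is the expected answer:
curl -k https://127.0.0.1:6443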
Configure the kubelet.service file
vi /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet --hostname-override=kubernetes-node-160 --pod-infra-container-image=harbor.enncloud.cn/enncloud/pause-amd64:3.1 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet.config.json --cert-dir=/etc/kubernetes/ssl --logtostderr=true --v=2
[Install]
WantedBy=multi-user.target
# Create the kubelet config file
vi /etc/kubernetes/kubelet.config.json
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "172.16.1.66",
"port": 10250,
"readOnlyPort": 0,
"cgroupDriver": "cgroupfs",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"MaxPods": "512",
"failSwapOn": false,
"containerLogMaxSize": "10Mi",
"containerLogMaxFiles": 5,
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.254.0.2"]
}
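As on the masters, the kubelet working directory must exist before the first start:
mkdir -p /var/lib/kubelet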
# Create a token for the node (run on a master)
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:kubernetes-node-160 --kubeconfig ~/.kube/config
# Configure cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kubernetes-node-160-bootstrap.kubeconfig
# Configure client authentication
kubectl config set-credentials kubelet-bootstrap --token=ap4lcp.3yai1to1f98sfray --kubeconfig=kubernetes-node-160-bootstrap.kubeconfig
# Configure the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubernetes-node-160-bootstrap.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=kubernetes-node-160-bootstrap.kubeconfig
Copy it into place on the node:
scp kubernetes-node-160-bootstrap.kubeconfig root@10.39.10.160:/etc/kubernetes/bootstrap.kubeconfig
systemctl restart kubelet
## If node registration fails (a bad config or some other reason) and the node needs to re-register, remove the stale files first:
delete the kubelet certificates and keys under /etc/kubernetes/ssl and the /etc/kubernetes/kubelet.kubeconfig file, then restart kubelet; the files are regenerated.
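A sketch of that cleanup on the node:
rm -f /etc/kubernetes/kubelet.kubeconfig
rm -f /etc/kubernetes/ssl/kubelet*
systemctl restart kubelet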
# Configure kube-proxy.service
vim /etc/kubernetes/kube-proxy.config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.39.10.160
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.254.64.0/18
healthzBindAddress: 10.39.10.160:10256
hostnameOverride: kubernetes-node-160
kind: KubeProxyConfiguration
metricsBindAddress: 10.39.10.160:10249
mode: "ipvs"
# Create the kube-proxy working directory and the service file
mkdir -p /var/lib/kube-proxy
vim /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.config.yaml --logtostderr=true --v=1
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Verify the nodes
kubectl get nodes
Check the files kubelet generated:
ls /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/ssl/kubelet*
# Configure the calico network
https://docs.projectcalico.org/v3.2/getting-started/kubernetes/
1. Install calico's RBAC roles:
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
2. Install calico; the manifest needs edits first, so download it, modify it as described below, then apply:
wget https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml
kubectl apply -f calico.yaml
Edits to calico.yaml:
etcd_endpoints: fill in the etcd cluster endpoints here
etcd_ca: "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key"
data:
# Populate the following files with etcd TLS configuration if desired, but leave blank if
# not using TLS for etcd.
# This self-hosted install expects three files with the following names. The value
# should be base64 encoded strings of the entire contents of each file.
Base64-encode the certificates and paste the resulting strings into this Secret.
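A sketch of producing those base64 strings from the certificates used in this guide (-w0 disables line wrapping):
base64 -w0 /etc/kubernetes/ssl/ca.pem        # etcd-ca
base64 -w0 /etc/kubernetes/ssl/etcd.pem      # etcd-cert
base64 -w0 /etc/kubernetes/ssl/etcd-key.pem  # etcd-key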
Common calico commands
Install calicoctl:
curl -L https://github.com/projectcalico/calicoctl/releases/download/v3.2.1/calicoctl -o /usr/local/bin/calicoctl
chmod +x /usr/local/bin/calicoctl
https://docs.projectcalico.org/v3.1/usage/calicoctl/configure/etcd
Create the calicoctl.cfg file:
vim /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  etcdEndpoints: https://10.39.10.154:2379,https://10.39.10.156:2379,https://10.39.10.159:2379
  etcdKeyFile: /etc/kubernetes/ssl/etcd-key.pem
  etcdCertFile: /etc/kubernetes/ssl/etcd.pem
  etcdCACertFile: /etc/kubernetes/ssl/ca.pem
With the config in place, calicoctl node status can be run to check the node mesh.
Original article: http://blog.51cto.com/xiaocainiaox/2169475