Prerequisites
- OS: 64-bit CentOS 7.6
- Firewall and SELinux disabled
- Swap disabled (running Kubernetes with swap enabled is not recommended)
- A hostname configured on every node; they only need to be unique
- Key-based, passwordless SSH from the first master to every node (including itself)
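The passwordless-SSH prerequisite can be bootstrapped from the first master. A minimal dry-run sketch, assuming hypothetical node addresses in NODES (replace them with your own); it only prints the commands, so drop the echo prefixes to actually execute them:

```shell
#!/bin/sh
# Hypothetical node list -- replace with your own node IPs or hostnames.
NODES="172.16.10.114 172.16.10.101 172.16.10.102"

# Dry run: print the key-distribution commands instead of executing them.
echo 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'
for node in $NODES; do
    echo "ssh-copy-id root@$node"
done
```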
Environment notes
The installation procedure in this guide is suitable for small-scale use.
Multi-master mode (at least three masters); keepalived must be installed on every master node.
Preparation (run on every node)
Configure the Docker and Kubernetes package repositories
# Switch to the yum repository directory
cd /etc/yum.repos.d/
# Add the Aliyun docker-ce repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Add the Aliyun Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Configure kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
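The net.bridge.* sysctls above only take effect while the br_netfilter kernel module is loaded, which may not be the case on a fresh CentOS install. A short sketch (assumes root) that loads the module now and persists the load across reboots:

```shell
#!/bin/sh
# Persist the br_netfilter module load so the bridge-nf sysctls
# survive a reboot, then load the module immediately.
mkdir -p /etc/modules-load.d
echo br_netfilter > /etc/modules-load.d/k8s.conf
modprobe br_netfilter 2>/dev/null || true   # may already be built into the kernel
```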
Install the required packages
# Install kubeadm, kubectl and kubelet
yum install kubeadm kubectl kubelet -y
# Enable docker and kubelet at boot
systemctl enable docker kubelet
# Start docker
systemctl start docker
Deployment
Install keepalived (run on all masters)
# If you already have a load balancer, this step can be skipped; use the LB address directly
# Run this on the initialization master first so the VIP attaches to it; otherwise stop keepalived on the other masters
# After installation you can add health checks to suit your own requirements
yum install keepalived -y
# Back up the original keepalived configuration
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Generate a new keepalived configuration; adjust the commented values below on each master
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id k8s-master1        # this master's hostname
    vrrp_mcast_group4 224.26.1.1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    nopreempt
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.20.1.8                # the VIP address
    }
}
EOF
# 配置keepalived开机启动和启动keepalived
systemctl enable keepalived
systemctl start keepalived
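After starting keepalived you can confirm that the VIP attached to the initialization master. A quick check, assuming the sample VIP 10.20.1.8 from the configuration above (substitute your own):

```shell
#!/bin/sh
# Report whether the VIP is currently bound to a local interface;
# it should be, on the initialization master.
VIP=10.20.1.8
if ip -o addr 2>/dev/null | grep -q "inet $VIP/"; then
    echo "VIP present on this node"
else
    echo "VIP absent from this node"
fi
```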
Generate the kubeadm master configuration file
cd && cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "172.29.2.188"                        # change to your VIP address
controlPlaneEndpoint: "172.29.2.188:6443" # change to your VIP address (must match the VIP configured in keepalived)
imageRepository: registry.cn-hangzhou.aliyuncs.com/peter1009
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
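Optionally, before initializing you can pre-pull the control-plane images referenced by kubeadm.yaml; this makes the init step faster and surfaces registry problems early. A sketch, guarded so it is a no-op on a machine where kubeadm or the file is absent:

```shell
#!/bin/sh
# Pre-pull the images listed by the kubeadm configuration.
if command -v kubeadm >/dev/null 2>&1 && [ -f kubeadm.yaml ]; then
    kubeadm config images pull --config kubeadm.yaml
else
    echo "kubeadm or kubeadm.yaml not found; skipping image pre-pull"
fi
```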
Initialize the first master
# Use the kubeadm.yaml generated in the previous step
kubeadm init --config kubeadm.yaml
# The command above produces output like the following:
[email protected]:~# kubeadm init --config kubeadm.yaml
I0522 06:20:13.352644 2622 version.go:96] could not fetch a Kubernetes version from
......... (output omitted)
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 --experimental-control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
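The bootstrap token in the join commands above expires after 24 hours by default. If you add worker nodes later, a fresh worker join command can be printed on the first master; the snippet below is guarded so it is a no-op where kubeadm is absent:

```shell
#!/bin/sh
# Print a new 'kubeadm join' command with a freshly created token.
if command -v kubeadm >/dev/null 2>&1; then
    kubeadm token create --print-join-command
else
    echo "kubeadm not installed; run this on the first master"
fi
```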
Install the cluster
# Note the quoted 'EOF': it stops the shell from expanding the variables while generating the script
cat <<'EOF' > copy.sh
CONTROL_PLANE_IPS="172.16.10.101 172.16.10.102" # change these two IPs to your second and third master IPs
for host in ${CONTROL_PLANE_IPS}; do
ssh $host mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
EOF
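Before running copy.sh you can verify that all eight certificate and key files it distributes actually exist on the first master:

```shell
#!/bin/sh
# Flag any PKI file required by copy.sh that is missing.
PKI=/etc/kubernetes/pki
for f in ca.crt ca.key sa.key sa.pub \
         front-proxy-ca.crt front-proxy-ca.key \
         etcd/ca.crt etcd/ca.key; do
    if [ -f "$PKI/$f" ]; then
        echo "ok       $f"
    else
        echo "MISSING  $f"
    fi
done
```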
# This step will fail if passwordless SSH login has not been configured
bash -x copy.sh
# Run the following on the current node so that kubectl can access the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On the other master nodes, run the control-plane join command from the init output (only after copy.sh has completed successfully)
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 --experimental-control-plane
# On the non-master (worker) nodes, run the worker join command from the init output
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
Install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check that the installation completed
[email protected]:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s4 Ready master 20m v1.14.2
[email protected]:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-8cc96f57d-cfr4j 1/1 Running 0 20m
kube-system coredns-8cc96f57d-stcz6 1/1 Running 0 20m
kube-system etcd-k8s4 1/1 Running 0 19m
kube-system kube-apiserver-k8s4 1/1 Running 0 19m
kube-system kube-controller-manager-k8s4 1/1 Running 0 19m
kube-system kube-flannel-ds-amd64-k4q6q 1/1 Running 0 50s
kube-system kube-proxy-lhjsf 1/1 Running 0 20m
kube-system kube-scheduler-k8s4 1/1 Running 0 19m
Test that the cluster works
# Remove the master taint so that pods can be scheduled onto the master; replace k8s4 with the node name in your own cluster
kubectl taint node k8s4 node-role.kubernetes.io/master:NoSchedule-
# Create an nginx deployment
[email protected]:~# kubectl create deploy nginx --image nginx
deployment.apps/nginx created
[email protected]:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-65f88748fd-9sk6z 1/1 Running 0 2m44s
# Expose nginx outside the cluster
[email protected]:~# kubectl expose deploy nginx --port=80 --type=NodePort
service/nginx exposed
[email protected]:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25m
nginx NodePort 10.104.109.234 <none> 80:32129/TCP 5s
[email protected]:~# curl 127.0.0.1:32129
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Original article: https://blog.51cto.com/linuxmaizi/2419889