Deploying Kubernetes v1.17.4 with highly available master nodes using kubeadm

Environment:

# OS: CentOS 7
# Docker version: 19.03.8
# Kubernetes version: v1.17.4
# K8S master node IPs: 192.168.2.175, 192.168.2.176, 192.168.2.177
# K8S worker node IPs: 192.168.2.185, 192.168.2.187
# Network plugin: flannel
# kube-proxy forwarding mode: ipvs
# Kubernetes package source: Aliyun mirror
# service-cidr: 10.96.0.0/16
# pod-network-cidr: 10.244.0.0/16
# Master high-availability tooling: confd + nginx  # see the dedicated post on this blog for that setup
# IP used to access the cluster: 192.168.2.175

Preparation:

# Run the following on all nodes
# Kernel parameters:
# Disable swap and tune the bridge/netfilter settings
 modprobe br_netfilter   # required, otherwise the bridge-nf sysctls below are not available
 vim /etc/sysctl.conf
 vm.swappiness=0
 net.ipv4.ip_forward = 1
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1
 net.bridge.bridge-nf-call-arptables = 1
 sysctl -p
 # Apply immediately for the running system
 swapoff -a && sysctl -w vm.swappiness=0
 # Edit fstab so swap is no longer mounted at boot
 vi /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
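# Non-interactive alternative (a sketch: it comments out any active swap entry; review /etc/fstab afterwards):
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]]+swap[[:space:]].*)$/#\1/' /etc/fstab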
# Add the Docker CE repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Docker daemon configuration
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
    "max-concurrent-downloads": 20,
    "data-root": "/apps/docker/data",
    "exec-root": "/apps/docker/root",
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
    "log-driver": "json-file",
    "bridge": "docker0",
    "oom-score-adjust": -1000,
    "debug": false,
    "log-opts": {
        "max-size": "100M",
        "max-file": "10"
    },
    "default-ulimits": {
        "nofile": {
            "Name": "nofile",
            "Hard": 1024000,
            "Soft": 1024000
        },
        "nproc": {
            "Name": "nproc",
            "Hard": 1024000,
            "Soft": 1024000
        },
       "core": {
            "Name": "core",
            "Hard": -1,
            "Soft": -1
      }

    }
}

# Install dependencies
yum install -y yum-utils ipvsadm telnet wget net-tools conntrack ipset jq iptables curl sysstat libseccomp socat nfs-utils fuse fuse-devel
# Install Docker dependencies
yum install -y python-pip python-devel yum-utils device-mapper-persistent-data lvm2
# Install Docker
yum install -y docker-ce
# Reload systemd unit files
systemctl daemon-reload
# Restart Docker so it picks up daemon.json
systemctl restart docker
# Enable Docker on boot
systemctl enable docker
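# Optional sanity check that Docker picked up daemon.json (a minimal check; the grep patterns assume docker info's English output):
docker info | grep -E 'Docker Root Dir|Registry Mirrors|Logging Driver'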
# Auto-load the ipvs kernel modules: create a boot-time module script
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make /etc/sysconfig/modules/ipvs.modules executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run /etc/sysconfig/modules/ipvs.modules once now
/etc/sysconfig/modules/ipvs.modules
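# Confirm the modules are loaded (note: on kernels 4.19+ the conntrack module is nf_conntrack rather than nf_conntrack_ipv4):
lsmod | grep -e ip_vs -e nf_conntrack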
-----------------------------------
# Kubernetes yum repository (Aliyun mirror)
cat << EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet and kubectl
yum install -y kubeadm kubelet kubectl 
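# The line above installs whatever is latest in the repo; to match the v1.17.4 used in this guide you can pin the versions instead (a sketch, assuming the 1.17.4 RPMs are still in the mirror):
yum install -y kubeadm-1.17.4 kubelet-1.17.4 kubectl-1.17.4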

Initializing the Kubernetes master nodes

# Run on node: 192.168.2.175
kubeadm init --apiserver-advertise-address=0.0.0.0 \
    --apiserver-cert-extra-sans=127.0.0.1 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --ignore-preflight-errors=all \
    --kubernetes-version=v1.17.4 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs \
    --token-ttl=24h0m0s \
    --control-plane-endpoint=192.168.2.175
# Init output:
[[email protected] ~]# kubeadm init --apiserver-advertise-address=0.0.0.0 >                      --apiserver-cert-extra-sans=127.0.0.1 >                      --image-repository=registry.aliyuncs.com/google_containers >                      --ignore-preflight-errors=all  >                      --kubernetes-version=v1.17.4 >                      --service-cidr=10.96.0.0/16 >                      --pod-network-cidr=10.244.0.0/16  >                      --upload-certs >                      --token-ttl=24h0m0s >                      --control-plane-endpoint=192.168.2.175
W0323 10:50:59.146189    1766 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0323 10:50:59.146465    1766 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "2-175" could not be reached
        [WARNING Hostname]: hostname "2-175": lookup 2-175 on 192.168.1.169:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run ‘systemctl enable kubelet.service‘
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull‘
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [2-175 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.175 192.168.2.175 127.0.0.1
]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [2-175 localhost] and IPs [192.168.2.175 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [2-175 localhost] and IPs [192.168.2.175 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0323 10:51:07.230835    1766 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0323 10:51:07.233471    1766 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.503800 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4aebc8cca4764e99b0a9a0475429f8a0636a49bd584f0a455db3cf8253b85c8a
[mark-control-plane] Marking the node 2-175 as control-plane by adding the label "node-role.kubernetes.io/master=‘‘"
[mark-control-plane] Marking the node 2-175 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: x7mj2b.rqcudt3fdev5863y
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6     --control-plane --certificate-key 4aebc8cca4764e99b0a9a0475429f8a0636a49bd584f0a455db3cf8253b85c8a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6
# Command for joining additional master nodes:
  kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6     --control-plane --certificate-key 4aebc8cca4764e99b0a9a0475429f8a0636a49bd584f0a455db3cf8253b85c8a
# Adjust the kube-proxy configuration
# First, copy the kubectl admin kubeconfig
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check that kube-proxy is running
[[email protected] ~]# kubectl get pod -A|grep kube-proxy
kube-system   kube-proxy-t87dw                1/1     Running   0          3m31s
[[email protected] ~]# kubectl get cm -A|grep kube-proxy
kube-system   kube-proxy                           2      3m53s
# Switch kube-proxy forwarding to ipvs
kubectl -n kube-system edit cm kube-proxy
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: true # enable --masquerade-all so return traffic flows back correctly
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 5s
      scheduler: "rr"
      strictARP: false
      syncPeriod: 5s
    kind: KubeProxyConfiguration
    metricsBindAddress: "0.0.0.0" # expose kube-proxy metrics on all interfaces
    mode: "ipvs"   # enable ipvs mode
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://127.0.0.1:6443 # change the apiserver address to 127.0.0.1
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2020-03-23T02:51:38Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "1825"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy
  uid: 462666da-ed39-4a16-a1fe-9da034badc38
# Adjust anything else to your needs
# Save the file: :wq!
# Delete the running kube-proxy pod so it restarts with the new configuration
[[email protected] ~]# kubectl -n kube-system get pod | grep kube-proxy
kube-proxy-t87dw                1/1     Running   0          15m
# Delete the pod
[[email protected] ~]#  kubectl -n kube-system delete pod kube-proxy-t87dw
pod "kube-proxy-t87dw" deleted
# Check that ipvs started correctly
[[email protected] ~]# ip a| grep ipvs
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
# ipvs is up
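# The IPVS virtual servers can also be inspected directly with ipvsadm (installed earlier); rr entries for the kubernetes and DNS service IPs should appear once kube-proxy is running in ipvs mode:
ipvsadm -Ln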
# Join the second master node: 192.168.2.176
kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6     --control-plane --certificate-key 4aebc8cca4764e99b0a9a0475429f8a0636a49bd584f0a455db3cf8253b85c8a
W0323 11:10:26.011187    1771 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0323 11:10:26.023439    1771 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0323 11:10:26.025626    1771 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-23T11:10:55.271+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.2.176:2379","attempt":0,"error"
:"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node 2-176 as control-plane by adding the label "node-role.kubernetes.io/master=‘‘"
[mark-control-plane] Marking the node 2-176 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run ‘kubectl get nodes‘ to see this node join the cluster.
# Join the third master node: 192.168.2.177
[[email protected] ~]# kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y >     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6 >     --control-plane --certificate-key 4aebc8cca4764e99b0a9a0475429f8a0636a49bd584f0a455db3cf8253b85c8a
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "2-177" could not be reached
        [WARNING Hostname]: hostname "2-177": lookup 2-177 on 192.168.1.169:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run ‘systemctl enable kubelet.service‘
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull‘
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [2-177 localhost] and IPs [192.168.2.177 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [2-177 localhost] and IPs [192.168.2.177 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [2-177 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.177 192.168.2.175 127.0.0.1
]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0323 11:12:03.147147    1744 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0323 11:12:03.164313    1744 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0323 11:12:03.167170    1744 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-23T11:12:27.840+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.2.177:2379","attempt":0,"error"
:"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node 2-177 as control-plane by adding the label "node-role.kubernetes.io/master=‘‘"
[mark-control-plane] Marking the node 2-177 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run ‘kubectl get nodes‘ to see this node join the cluster.
# Enable kubelet on boot
systemctl enable kubelet

Verifying the master cluster and making the remaining manual configuration changes

[[email protected] ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
2-175   NotReady   master   22m     v1.17.4
2-176   NotReady   master   3m46s   v1.17.4
2-177   NotReady   master   2m8s    v1.17.4
# All three masters have joined
# Adjust the master configuration
# Edit etcd.yaml on 192.168.2.175 and 192.168.2.176; 192.168.2.177 does not need to be changed
# In the manifests directory, the parameter to change is --initial-cluster
cd /etc/kubernetes/manifests/
vim etcd.yaml
 --initial-cluster=2-177=https://192.168.2.177:2380,2-176=https://192.168.2.176:2380,2-175=https://192.168.2.175:2380
# Save and wait for the etcd pod to restart
# Verify etcd came back up with the new flags
 [[email protected] manifests]# ps -ef | grep initial-cluster
root       18722   18700  5 11:22 ?        00:00:10 etcd --advertise-client-urls=https://192.168.2.175:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --in
itial-advertise-peer-urls=https://192.168.2.175:2380 --initial-cluster=2-177=https://192.168.2.177:2380,2-176=https://192.168.2.176:2380,2-175=https://192.168.2.175:2380 --key-file=/etc/kubernetes/pki/etcd/ser
ver.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.2.175:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.2.175:2380 --name=2-175 --peer-cert-file=/etc/kuber
netes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kuber
netes/pki/etcd/ca.crt
 [[email protected] manifests]# kubectl get pod -A| grep etcd
kube-system   etcd-2-175                      1/1     Running   0          22s
kube-system   etcd-2-176                      1/1     Running   0          2m57s
kube-system   etcd-2-177                      1/1     Running   0          12m
# All etcd members restarted successfully
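# A further check of etcd membership, run from any master via the static etcd pod (pod name and endpoint assume the hostnames/IPs used in this guide):
kubectl -n kube-system exec etcd-2-175 -- etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --endpoints=https://192.168.2.175:2379 member list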
# Update the etcd endpoints in kube-apiserver.yaml on every master
# Before the change:
[[email protected] manifests]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
vim kube-apiserver.yaml
--etcd-servers=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379
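# The same edit non-interactively (a sketch; each node's manifest initially lists only its own etcd endpoint, which the regex below matches):
sed -i -E 's#--etcd-servers=https://192\.168\.2\.17[5-7]:2379#--etcd-servers=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379#' /etc/kubernetes/manifests/kube-apiserver.yaml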
[[email protected] manifests]# ps -ef| grep kube-apiserver
root       15150   15133 50 11:29 ?        00:00:13 kube-apiserver --advertise-address=192.168.2.177 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-
admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/p
ki/apiserver-etcd-client.key --etcd-servers=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet
-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-clien
t.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-e
xtra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --servic
e-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
# After the change:
[[email protected] manifests]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
# All three etcd members are healthy
# On every master, point controller-manager.conf, kubelet.conf and scheduler.conf at 127.0.0.1
cd /etc/kubernetes
# In controller-manager.conf, kubelet.conf and scheduler.conf change the server address from 192.168.2.175 (or the node's own IP, whichever the file contains) to 127.0.0.1
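# A sketch of the same edit with sed; it rewrites whichever of the three master addresses appears in each file:
sed -i -E 's#server: https://192\.168\.2\.17[5-7]:6443#server: https://127.0.0.1:6443#' \
  /etc/kubernetes/controller-manager.conf \
  /etc/kubernetes/kubelet.conf \
  /etc/kubernetes/scheduler.conf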
# Restart kubelet
systemctl restart kubelet
# Restart the kube-controller-manager and kube-scheduler static pods
cd /etc/kubernetes/manifests
mv  kube-controller-manager.yaml  kube-scheduler.yaml /opt/
docker ps
mv /opt/{kube-controller-manager.yaml,kube-scheduler.yaml} ./
docker ps
# Wait for them to come back up
# Check connections to port 6443
[[email protected] manifests]# netstat -tnp| grep 6443
tcp        0      0 127.0.0.1:9230          127.0.0.1:6443          ESTABLISHED 2469/kube-proxy
tcp        0      0 127.0.0.1:11418         127.0.0.1:6443          ESTABLISHED 25863/kube-schedule
tcp        0      0 127.0.0.1:11416         127.0.0.1:6443          ESTABLISHED 25863/kube-schedule
tcp        0      0 127.0.0.1:11424         127.0.0.1:6443          ESTABLISHED 25870/kube-controll
tcp        0      0 127.0.0.1:10798         127.0.0.1:6443          ESTABLISHED 21299/kubelet
tcp6       0      0 ::1:6443                ::1:14928               ESTABLISHED 15150/kube-apiserve
tcp6       0      0 ::1:14928               ::1:6443                ESTABLISHED 15150/kube-apiserve
tcp6       0      0 127.0.0.1:6443          127.0.0.1:11418         ESTABLISHED 15150/kube-apiserve
tcp6       0      0 127.0.0.1:6443          127.0.0.1:9230          ESTABLISHED 15150/kube-apiserve
tcp6       0      0 127.0.0.1:6443          127.0.0.1:10798         ESTABLISHED 15150/kube-apiserve
tcp6       0      0 127.0.0.1:6443          127.0.0.1:11416         ESTABLISHED 15150/kube-apiserve
tcp6       0      0 127.0.0.1:6443          127.0.0.1:11424         ESTABLISHED 15150/kube-apiserve
# Every component now connects to the apiserver via 127.0.0.1:6443
[[email protected] manifests]# kubectl -n kube-system get pod
NAME                            READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-2mfkv         0/1     Pending   0          55m
coredns-9d85f5447-shvf6         0/1     Pending   0          55m
etcd-2-175                      1/1     Running   0          22m
etcd-2-176                      1/1     Running   0          25m
etcd-2-177                      1/1     Running   0          34m
kube-apiserver-2-175            1/1     Running   0          15m
kube-apiserver-2-176            1/1     Running   0          15m
kube-apiserver-2-177            1/1     Running   0          17m
kube-controller-manager-2-175   1/1     Running   4          6m31s
kube-controller-manager-2-176   1/1     Running   0          4m7s
kube-controller-manager-2-177   1/1     Running   1          3m25s
kube-proxy-5fd2z                1/1     Running   0          34m
kube-proxy-dsbqr                1/1     Running   0          36m
kube-proxy-plb5j                1/1     Running   0          39m
kube-scheduler-2-175            1/1     Running   3          6m31s
kube-scheduler-2-176            1/1     Running   0          4m7s
kube-scheduler-2-177            1/1     Running   2          3m25s
# Check cluster status
[[email protected] manifests]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
# Check kube-controller-manager and kube-scheduler leader election
[[email protected] manifests]# kubectl -n kube-system get ep kube-controller-manager -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: ‘{"holderIdentity":"2-176_e162da3f-aa29-49c0-aeeb-c99f68d01179","leaseDurationSeconds":15,"acquireTime":"2020-03-23T03:43:35Z","renewTime":"2020-03-23T03:47:39Z","
leaderTransitions":6}‘
  creationTimestamp: "2020-03-23T02:51:38Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "8710"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 175dba00-3de8-4b72-8823-79b261be24af
[[email protected] manifests]# kubectl -n kube-system get ep kube-scheduler -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: ‘{"holderIdentity":"2-177_5f981562-556f-4e9a-8aea-d0eeaef4211a","leaseDurationSeconds":15,"acquireTime":"2020-03-23T03:46:29Z","renewTime":"2020-03-23T03:48:17Z","
leaderTransitions":7}‘
  creationTimestamp: "2020-03-23T02:51:37Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "8808"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 075b24a1-df45-4417-bd4c-19b3b09ce61f 

Deploying the worker nodes

# Worker nodes: 192.168.2.185, 192.168.2.187
# Start the local apiserver proxy (the confd + nginx ha-tools container mentioned in the environment notes)
docker run -tid --restart=always --network=host --name=ha-proxy -e "CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177" juestnow/ha-tools:v1.17.9 CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177
# This also works:
docker run -tid --restart=always --network=host --name=ha-proxy -e "CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177" juestnow/ha-tools:v1.17.9
# but then the backend apiserver addresses are not visible in the container's command line, so passing them as arguments as well is generally clearer
[[email protected] ~]# docker run -tid --restart=always > --network=host > --name=ha-proxy -e "CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177" > juestnow/ha-tools:v1.17.9 CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177
ac50b397d134fdf20fa1ee75b4f0ad8f73ec4c11af5ae856009518d0b491fb33
[[email protected] ~]# docker run -tid --restart=always > --network=host > --name=ha-proxy -e "CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177" > juestnow/ha-tools:v1.17.9 CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177
6fe16ae05f02524395d5a512ca64cc9e74d7e70e7193d29371fcad53dd8ae985
# Check that port 6443 is listening and curl it; getting a response (even a 403) shows the proxy is forwarding to the apiservers correctly
[[email protected] ~]# netstat -tnlp| grep 6443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1855/nginx: master
[[email protected] ~]# curl -k https://127.0.0.1:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[[email protected] ~]#
# Edit the cluster-info ConfigMap in the kube-public namespace (change server to https://127.0.0.1:6443 so joining nodes go through the local proxy)
 kubectl -n kube-public edit  configmaps cluster-info
 # Please edit the object below. Lines beginning with a ‘#‘ will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  jws-kubeconfig-x7mj2b: eyJhbGciOiJIUzI1NiIsImtpZCI6Ing3bWoyYiJ9..LSpL_mySpjfHJ0sAOL49pQxS1nsdDEc9T5fqcgSyuD8
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXlNekF5TlRFd01Wb1hEVE13TURNeU1UQXlOVEV3TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTXhSClNwQ053dUpYTXNQY1VKUjBrTi9CblB3MlY5MUlGRVNCMmRJWkNrdWFJN05XUmh5NWlWZVRVUVNEWVZoRkhENVYKWTdxYkxmWllZc3lZRklhU2h1RUZ6Y3d5Rm5HZlBwdEpPVVhHWGNiRmMvdDdjWjR1Z2dOLzdibVFRUGVaVkZnUApSajJqZFU2YWkzdkVkMTNpeURnMmJCSEdvZjdYY0REc3NEZVM4UytWaldyc3lENnV4YTRTWWl1RlNaQjYvSXp3ClYrZXV5UXhHcnlVMDdNbjdrTy9KQkNvT2FUK01CNHlWWEppZW5xZ2tiTms2aDQzenBtQ3RUazJJNDRTeERnWEcKeWZMRVlaQlVKOU1vZmZjQXBxaWN6SU1Hd3RoQzE5VUtuWkhYVkhralVFeCt4dEtWb28yaXpmWElpOVhtSkJ2dwpQdEZjUGI0VHZHajFtaWZ5WTFzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFN3dsYmgvbGpkWGUrYUJCVEI3UE1nNmVLeTQKTmxITjBjUnhpT0dTYmVCV3VFRlU0Wkh6MGNia1RlNmtjVGJzbTl0TStRZ2NSWmVTYkpPdkQxM0pPcTYxMGVJTApxRDNrRE1uYVFGckgyd3g5L1ZpMmE3S1diaHJycWE5Qlg2QnRsems1WkpkT1pZNXQ3WUxVcDR2a1RSa3A4MXRoCjl0MUcxK2ZTRWNJdExLaGkrTml6VkpKSWV6SUwvdWFzOHVCcDFWZGllU000WW5rNlU2cjN5RGM1UHE5OU9hdFgKbWl1NEtHZkIyaExzSHo4WDI0ZFhEY3JTYlZnSGRtUjBOcWNRa0hoZTcvNWxQd3p5MFUxRjJJTHhaZVY1MGhKLwpmQWtCTmFCVkQ4Z2w4dkVoN2VBdmp0WEYyQ0VLK0dmUkZFUlJwU0ZTdmgzaFZzWTZTUi9UK3FkZGJsWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://127.0.0.1:6443
      name: ""
    contexts: null
    current-context: ""
    kind: Config
    preferences: {}
    users: null
kind: ConfigMap
metadata:
  creationTimestamp: "2020-03-23T02:51:37Z"
  name: cluster-info
  namespace: kube-public
  resourceVersion: "336"
  selfLink: /api/v1/namespaces/kube-public/configmaps/cluster-info
  uid: 3bcd2132-063b-4426-8b96-b5748dc90ca8
# Save and quit
# Edit the kubeadm-config ConfigMap in the kube-system namespace
kubectl -n kube-system edit configmaps kubeadm-config
controlPlaneEndpoint: 192.168.2.175:6443
# change it to: controlPlaneEndpoint: 127.0.0.1:6443
# Save and quit
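# The same change non-interactively (a sketch; it assumes the ConfigMap still contains the original endpoint string):
kubectl -n kube-system get cm kubeadm-config -o yaml | \
  sed 's#controlPlaneEndpoint: 192.168.2.175:6443#controlPlaneEndpoint: 127.0.0.1:6443#' | \
  kubectl replace -f -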
# Join the worker nodes
kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6
[[email protected] ~]# kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y >     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6
W0323 12:03:41.671078    1859 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "2-185" could not be reached
        [WARNING Hostname]: hostname "2-185": lookup 2-185 on 192.168.1.169:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run ‘systemctl enable kubelet.service‘
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run ‘kubectl get nodes‘ on the control-plane to see this node join the cluster.
[[email protected] ~]# kubeadm join 192.168.2.175:6443 --token x7mj2b.rqcudt3fdev5863y >     --discovery-token-ca-cert-hash sha256:f02b6f6b7c74f8e7aad199bbe893d32e05fafb40b734e63669e0d2b2ca5e0ef6
W0323 12:04:30.893514    1886 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "2-187" could not be reached
        [WARNING Hostname]: hostname "2-187": lookup 2-187 on 192.168.1.169:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run ‘systemctl enable kubelet.service‘
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run ‘kubectl get nodes‘ on the control-plane to see this node join the cluster.
# Check network connections
[[email protected] ~]# netstat -tnp| grep 6443
tcp        0      0 192.168.2.185:28426     192.168.2.175:6443      ESTABLISHED 1832/nginx: worker
tcp        0      0 127.0.0.1:6443          127.0.0.1:34380         ESTABLISHED 1832/nginx: worker
tcp        0      0 127.0.0.1:6443          127.0.0.1:34368         ESTABLISHED 1832/nginx: worker
tcp        0      0 127.0.0.1:34368         127.0.0.1:6443          ESTABLISHED 1966/kubelet
tcp        0      0 127.0.0.1:34380         127.0.0.1:6443          ESTABLISHED 2293/kube-proxy
tcp        0      0 192.168.2.185:28418     192.168.2.175:6443      ESTABLISHED 1832/nginx: worker

[[email protected] ~]# netstat -tnp| grep 6443
tcp        0      0 127.0.0.1:44804         127.0.0.1:6443          ESTABLISHED 2340/kube-proxy
tcp        0      0 127.0.0.1:6443          127.0.0.1:44804         ESTABLISHED 1856/nginx: worker
tcp        0      0 127.0.0.1:44792         127.0.0.1:6443          ESTABLISHED 1995/kubelet
tcp        0      0 192.168.2.187:38590     192.168.2.176:6443      ESTABLISHED 1856/nginx: worker
tcp        0      0 127.0.0.1:6443          127.0.0.1:44792         ESTABLISHED 1856/nginx: worker
tcp        0      0 192.168.2.187:38598     192.168.2.176:6443      ESTABLISHED 1856/nginx: worker
# Check the cluster
[[email protected] manifests]# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
2-175   NotReady   master   75m     v1.17.4
2-176   NotReady   master   56m     v1.17.4
2-177   NotReady   master   54m     v1.17.4
2-185   NotReady   <none>   3m9s    v1.17.4
2-187   NotReady   <none>   2m21s   v1.17.4
# All nodes have joined
# Enable kubelet on boot
systemctl enable kubelet

Deploying the flannel network plugin

# In this manifest the CNI conflist chains the flannel, portmap (pod hostPort mapping) and tuning plugins
# (the tuning entry sets network sysctls inside each container); net-conf.json sets Network to the pod CIDR
# 10.244.0.0/16 and enables DirectRouting (mixed VXLAN/host-gw) for the VXLAN backend.
vim kube-flannel.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
     {
     "name":"cni0",
     "cniVersion":"0.3.1",
     "plugins":[
       {
         "type":"flannel",
         "delegate":{
           "forceAddress":false,
           "hairpinMode": true,
           "isDefaultGateway":true
         }
       },
       {
         "type":"portmap", # pod 宿主机端口映射
         "capabilities":{
           "portMappings":true
         }
       },
     {
       "name": "mytuning",  # 容器内部net 参数优化
       "type": "tuning",
       "sysctl": {
               "net.core.somaxconn": "65535",
               "net.ipv4.ip_local_port_range": "1024 65535",
               "net.ipv4.tcp_keepalive_time": "600",
               "net.ipv4.tcp_keepalive_probes": "10",
               "net.ipv4.tcp_keepalive_intvl": "30"
       }
     }
     ]
     }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16", # pod cird
      "Backend": {
        "Type": "VXLAN",
        "Directrouting": true # 使用混合模式
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
# Deploy flannel
[email protected]# kubectl apply -f kube-flannel.yaml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
# Check cluster status; the nodes turn Ready once flannel is running
[[email protected] manifests]# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
2-175   NotReady   master   75m     v1.17.4
2-176   NotReady   master   56m     v1.17.4
2-177   NotReady   master   54m     v1.17.4
2-185   NotReady   <none>   3m9s    v1.17.4
2-187   NotReady   <none>   2m21s   v1.17.4
[[email protected] manifests]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
2-175   Ready    master   84m   v1.17.4
2-176   Ready    master   65m   v1.17.4
2-177   Ready    master   63m   v1.17.4
2-185   Ready    <none>   12m   v1.17.4
2-187   Ready    <none>   11m   v1.17.4
# All nodes are now Ready
[[email protected] manifests]# kubectl get pod -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-2mfkv         1/1     Running   0          85m
kube-system   coredns-9d85f5447-shvf6         1/1     Running   0          85m
kube-system   etcd-2-175                      1/1     Running   0          53m
kube-system   etcd-2-176                      1/1     Running   0          55m
kube-system   etcd-2-177                      1/1     Running   0          64m
kube-system   kube-apiserver-2-175            1/1     Running   0          45m
kube-system   kube-apiserver-2-176            1/1     Running   0          46m
kube-system   kube-apiserver-2-177            1/1     Running   0          47m
kube-system   kube-controller-manager-2-175   1/1     Running   4          36m
kube-system   kube-controller-manager-2-176   1/1     Running   1          34m
kube-system   kube-controller-manager-2-177   1/1     Running   1          33m
kube-system   kube-flannel-ds-amd64-5286c     1/1     Running   0          7m37s
kube-system   kube-flannel-ds-amd64-8wxzz     1/1     Running   0          7m37s
kube-system   kube-flannel-ds-amd64-j7c6t     1/1     Running   0          7m37s
kube-system   kube-flannel-ds-amd64-whhm9     1/1     Running   0          7m37s
kube-system   kube-flannel-ds-amd64-wlwvk     1/1     Running   0          7m37s
kube-system   kube-proxy-5fd2z                1/1     Running   0          64m
kube-system   kube-proxy-dsbqr                1/1     Running   0          66m
kube-system   kube-proxy-hz8fl                1/1     Running   0          12m
kube-system   kube-proxy-plb5j                1/1     Running   0          69m
kube-system   kube-proxy-zrhm7                1/1     Running   0          13m
kube-system   kube-scheduler-2-175            1/1     Running   3          36m
kube-system   kube-scheduler-2-176            1/1     Running   0          34m
kube-system   kube-scheduler-2-177            1/1     Running   3          33m
# All pods are running normally

Testing the cluster: pod networking, DNS resolution and external access

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: net-tools
  labels:
    k8s-app: net-tools
spec:
  selector:
    matchLabels:
      k8s-app: net-tools
  template:
    metadata:
      labels:
        k8s-app: net-tools
    spec:
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      containers:
      - name: net-tools
        image: juestnow/net-tools
        command:
          - /bin/sh
          - '-c'
          - set -e -x; tail -f /dev/null
        resources:
          limits:
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
      dnsConfig:
        options:
          - name: single-request-reopen
EOF

[email protected]:/mnt/g/work# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
net-tools-j8hsd   1/1     Running   0          3m14s   10.244.2.3   2-177   <none>           <none>
net-tools-jxfj2   1/1     Running   0          5m29s   10.244.3.6   2-185   <none>           <none>
net-tools-mzkwv   1/1     Running   0          7m27s   10.244.0.3   2-175   <none>           <none>
net-tools-w9j6h   1/1     Running   0          4m45s   10.244.1.3   2-176   <none>           <none>
net-tools-z5md7   1/1     Running   0          4m1s    10.244.4.4   2-187   <none>           <none>
# DNS resolution from inside a pod
[[email protected] manifests]# kubectl exec -ti net-tools-jxfj2 /bin/sh
/ #
/ #
/ #
/ # ping www.qq.com
PING www.qq.com (113.96.232.215): 56 data bytes
64 bytes from 113.96.232.215: seq=0 ttl=54 time=5.349 ms
64 bytes from 113.96.232.215: seq=1 ttl=54 time=5.556 ms
64 bytes from 113.96.232.215: seq=2 ttl=54 time=6.150 ms
# DNS resolution works and external access is fine
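# Also exercise the in-cluster DNS from the same pod (if the image includes nslookup); with service-cidr 10.96.0.0/16 the kubernetes service should resolve to 10.96.0.1:
/ # nslookup kubernetes.default.svc.cluster.local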
ping   10.244.2.3
/ # ping   10.244.2.3
PING 10.244.2.3 (10.244.2.3): 56 data bytes
64 bytes from 10.244.2.3: seq=0 ttl=62 time=1.450 ms
64 bytes from 10.244.2.3: seq=1 ttl=62 time=0.707 ms
64 bytes from 10.244.2.3: seq=2 ttl=62 time=0.766 ms
--- 10.244.2.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.707/0.974/1.450 ms
# Pod-to-pod networking is fine

Notes

# kubeadm tokens expire after 24 hours by default; once expired, generate a new token before joining more nodes
# List tokens
kubeadm token list
# Create a token
kubeadm token create
# Forgot the node join command printed when the first master was initialized?
# The simple way:
kubeadm token create --print-join-command
# Alternative:
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
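# If you also need to recompute the --discovery-token-ca-cert-hash by hand, the standard openssl pipeline (run on a master) is:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'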
# From here you can deploy monitoring, your first applications, and so on

Original article: https://blog.51cto.com/juestnow/2481010
