Installing Kubernetes v1.11.3 with kubeadm: multi-master HA with IPVS enabled

Environment:

OS: CentOS 7.5
Kernel: 4.18.7-1.el7.elrepo.x86_64
Kubernetes: v1.11.3
Docker-ce: 18.06

Keepalived provides a highly available VIP for the apiserver
HAProxy load-balances traffic across the apiservers
3 masters and 3 etcd members keep the k8s cluster available

192.168.1.1        master
192.168.1.2        master2
192.168.1.3        master3
192.168.1.4        Keepalived + Haproxy
192.168.1.5        Keepalived + Haproxy
192.168.1.6        etcd1
192.168.1.7        etcd2
192.168.1.8        etcd3
192.168.1.9        node1
192.168.1.10       node2
192.168.1.100      VIP, the apiserver address

Original Youdao note: http://note.youdao.com/noteshare?id=cd79131892c3a5bdae220d6fd8013555&sub=0687104101804B26AC12AE423C7E13E6

1. Preparation

For convenience, all commands are run as the root user.
The following steps only need to be performed on the Kubernetes cluster nodes.

  • Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0

systemctl disable firewalld
systemctl stop firewalld
  • Turn off swap
swapoff -a
  • Configure forwarding-related kernel parameters, otherwise errors may occur later
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

sysctl --system
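Note (added here, not in the original): the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so if sysctl --system complains about unknown keys, loading the module first should help:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   #also load it on every boot
sysctl --system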
  • Load the IPVS kernel modules
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed  -r 's#(.*).ko.*#\1#'\`; do
    /sbin/modinfo -F filename \$i  &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
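A quick sanity check (not in the original) that the IPVS modules really loaded:

lsmod | grep -e ip_vs -e nf_conntrack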
  • Install cfssl
#Installing on the master node is enough!!!

wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo  https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
for cfssl in `ls /bin/cfssl*`;do chmod +x $cfssl;done;
  • Install kubelet/kubeadm/kubectl from the Aliyun mirror
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y  kubelet kubeadm kubectl
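Note: the command above installs whatever version the repository currently provides, which may be newer than v1.11.3. Pinning the packages to the version this guide targets is an option (an addition, the original simply installs the latest):

yum install -y kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3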
  • Install Docker and disable the docker0 bridge
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce

mkdir /etc/docker/
cat << EOF > /etc/docker/daemon.json
{   "registry-mirrors": ["https://registry.docker-cn.com"],
    "live-restore": true,
    "default-shm-size": "128M",
    "bridge": "none",
    "max-concurrent-downloads": 10,
    "oom-score-adjust": -1000,
    "debug": false
}
EOF

#Restart docker
systemctl daemon-reload
systemctl enable docker
systemctl restart docker

#This step can be skipped; kubeadm will later be pointed at the Aliyun registry to pull the required images
#Set a docker proxy in order to pull the k8s images
mkdir /etc/systemd/system/docker.service.d/
cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.0.6:1080/" "HTTPS_PROXY=http://192.168.0.6:1080/ "
Environment="NO_PROXY=localhost,127.0.0.1,1ti39uv1.mirror.aliyuncs.com,acs-cn-hangzhou-mirror.oss-cn-hangzhou.aliyuncs.com"
EOF

#Restart docker
systemctl daemon-reload
systemctl restart docker
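If the proxy drop-in is used, whether docker actually picked up the environment can be checked with (an extra check, not in the original):

systemctl show --property=Environment docker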
  • Configure the hosts file
#Add the following entries to the hosts file on every node; one way to append them is shown after the list

192.168.1.1    master
192.168.1.2    master2
192.168.1.3    master3
192.168.1.4    lb1
192.168.1.5    lb2
192.168.1.6    etcd1
192.168.1.7    etcd2
192.168.1.8    etcd3
192.168.1.9    node1
192.168.1.10   node2
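One way to append these entries on every host (a sketch; the original only lists the entries):

cat << EOF >> /etc/hosts
192.168.1.1    master
192.168.1.2    master2
192.168.1.3    master3
192.168.1.4    lb1
192.168.1.5    lb2
192.168.1.6    etcd1
192.168.1.7    etcd2
192.168.1.8    etcd3
192.168.1.9    node1
192.168.1.10   node2
EOF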

2. Configure etcd

  • Generate the etcd certificates
mkdir -pv $HOME/ssl && cd $HOME/ssl

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "192.168.1.6",
      "192.168.1.7",
      "192.168.1.8"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

#Generate the certificates and copy them to the other etcd nodes

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

mkdir -pv /etc/etcd/ssl
mkdir -pv /etc/kubernetes/pki/etcd
cp etcd*.pem /etc/etcd/ssl
cp etcd*.pem /etc/kubernetes/pki/etcd

scp -r /etc/etcd 192.168.1.6:/etc/
scp -r /etc/etcd 192.168.1.7:/etc/
scp -r /etc/etcd 192.168.1.8:/etc/
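Before starting etcd it can be worth confirming that the server certificate covers all three etcd IPs (an extra check, not in the original); the sans field in the output should list 127.0.0.1, 192.168.1.6, 192.168.1.7 and 192.168.1.8:

cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem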
  • Start etcd on the etcd1 host
yum install -y etcd 

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Start etcd on the etcd2 host
yum install -y etcd 

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.7:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.7:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.7:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.7:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Start etcd on the etcd3 host
yum install -y etcd 

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.8:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.8:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.8:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.8:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Check the etcd cluster
etcdctl --endpoints "https://192.168.1.6:2379,https://192.168.1.7:2379,https://192.168.1.8:2379"   --ca-file=/etc/etcd/ssl/etcd-ca.pem  --cert-file=/etc/etcd/ssl/etcd.pem   --key-file=/etc/etcd/ssl/etcd-key.pem   cluster-health

[root@etcd1 ~]# etcdctl --endpoints "https://192.168.1.6:2379,https://192.168.1.7:2379,https://192.168.1.8:2379"   --ca-file=/etc/etcd/ssl/etcd-ca.pem   --cert-file=/etc/etcd/ssl/etcd.pem   --key-file=/etc/etcd/ssl/etcd-key.pem   cluster-health
member 3639deb1869a1bda is healthy: got healthy result from https://127.0.0.1:2379
member b75e13f1faa57bd8 is healthy: got healthy result from https://127.0.0.1:2379
member e31fec5bb4c882f2 is healthy: got healthy result from https://127.0.0.1:2379
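The check above uses the legacy v2 client interface. Since the CentOS etcd 3.x package also serves the v3 API, a roughly equivalent v3 health check (an assumption, not part of the original) looks like this:

ETCDCTL_API=3 etcdctl \
  --endpoints "https://192.168.1.6:2379,https://192.168.1.7:2379,https://192.168.1.8:2379" \
  --cacert=/etc/etcd/ssl/etcd-ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health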

3. Configure Keepalived

  • Configure on the lb1 host
yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
    global_defs {
        notification_email {
            [email protected]      #notification recipient address
        }
        notification_email_from [email protected]    #sender address
        smtp_server 127.0.0.1   #mail server address
        smtp_connect_timeout 30
        router_id node1         #hostname; just needs to differ on each node
        vrrp_mcast_group4 224.0.100.100    #multicast group address
    }       

vrrp_instance VI_1 {
    state MASTER        #BACKUP on the other node
    interface eth0      #NIC the VIP floats to
    virtual_router_id 6 #must be the same on all nodes
    priority 100        #priority; the backup node's value must be lower than the master's
    advert_int 1        #advertisement interval, 1 second
    authentication {
        auth_type PASS      #pre-shared key authentication
        auth_pass 571f97b2  #the key
    }
    virtual_ipaddress {
        192.168.1.100/24    #the VIP address
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
  • Configure on the lb2 host
yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
    global_defs {
        notification_email {
            [email protected]      #notification recipient address
        }
        notification_email_from [email protected]    #sender address
        smtp_server 127.0.0.1   #mail server address
        smtp_connect_timeout 30
        router_id node2         #hostname; just needs to differ on each node
        vrrp_mcast_group4 224.0.100.100  #multicast group address
    }       

vrrp_instance VI_1 {
    state BACKUP        #MASTER on the other node
    interface eth0      #NIC the VIP floats to
    virtual_router_id 6 #must be the same on all nodes
    priority 80        #priority; the backup node's value must be lower than the master's
    advert_int 1        #advertisement interval, 1 second
    authentication {
        auth_type PASS      #pre-shared key authentication
        auth_pass 571f97b2  #the key
    }
    virtual_ipaddress {
        192.168.1.100/24    #the VIP that floats over to this node
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
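To confirm which LB currently holds the VIP (a quick check, not in the original; eth0 is the interface name assumed in the config above):

ip addr show eth0 | grep 192.168.1.100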

4. Configure HAProxy

  • On the lb1 host
yum install  -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  192.168.1.1:6443 check maxconn 2000
    server master2 192.168.1.2:6443 check maxconn 2000
    server master3 192.168.1.3:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
  • On the lb2 host
yum install  -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  192.168.1.1:6443 check maxconn 2000
    server master2 192.168.1.2:6443 check maxconn 2000
    server master3 192.168.1.3:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
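Once both load balancers are up, a quick check (not in the original) that HAProxy is listening on the apiserver port:

ss -tnlp | grep 6443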

5. Initialize the masters

  • Initialize master1
#kubeadm init config file reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file

cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3      # kubernetes version
api:
  advertiseAddress: 192.168.1.1
  bindPort: 6443
  controlPlaneEndpoint: 192.168.1.100:6443   #the VIP address
apiServerCertSANs:              #list every master IP, LB IP, and any other address, domain name, or hostname you might use to reach the apiserver
- master
- master2
- master3
- 192.168.1.1
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
- 192.168.1.5
- 192.168.1.100
- 127.0.0.1
etcd:    #the external etcd endpoints
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: /etc/kubernetes/pki/etcd/etcd-ca.pem
    certFile: /etc/kubernetes/pki/etcd/etcd.pem
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16      # pod network CIDR
kubeProxy:
  config:
    mode: ipvs   #enable IPVS mode
featureGates:
  CoreDNS: true
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # image repository
EOF

systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1  k8s.gcr.io/pause:3.1
kubeadm init --config /root/kubeadm-init.yaml

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh 

scp -r /etc/kubernetes/pki 192.168.1.2:/etc/kubernetes/
scp -r /etc/kubernetes/pki 192.168.1.3:/etc/kubernetes/
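Before initializing the other masters, it can be worth confirming that the control plane on master1 came up healthy (an extra check, not part of the original):

kubectl get cs
kubectl get pod -n kube-system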
  • Initialize master2
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3      # kubernetes version
api:
  advertiseAddress: 192.168.1.2
  bindPort: 6443
  controlPlaneEndpoint: 192.168.1.100:6443
apiServerCertSANs:      #list every master IP, LB IP, and any other address, domain name, or hostname you might use to reach the apiserver
- master
- master2
- master3
- 192.168.1.1
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
- 192.168.1.5
- 192.168.1.100
- 127.0.0.1
etcd:
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: /etc/kubernetes/pki/etcd/etcd-ca.pem
    certFile: /etc/kubernetes/pki/etcd/etcd.pem
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16      # pod network CIDR
kubeProxy:
  config:
    mode: ipvs
featureGates:
  CoreDNS: true
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # image repository
EOF

systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1  k8s.gcr.io/pause:3.1
kubeadm init --config /root/kubeadm-init.yaml

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh
  • Initialize master3
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3      # kubernetes version
api:
  advertiseAddress: 192.168.1.3
  bindPort: 6443
  controlPlaneEndpoint: 192.168.1.100:6443
apiServerCertSANs:       #list every master IP, LB IP, and any other address, domain name, or hostname you might use to reach the apiserver
- master
- master2
- master3
- 192.168.1.1
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
- 192.168.1.5
- 192.168.1.100
- 127.0.0.1
etcd:
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: /etc/kubernetes/pki/etcd/etcd-ca.pem
    certFile: /etc/kubernetes/pki/etcd/etcd.pem
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16      # pod network CIDR
kubeProxy:
  config:
    mode: ipvs
featureGates:
  CoreDNS: true
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # image repository
EOF

systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1  k8s.gcr.io/pause:3.1
kubeadm init --config /root/kubeadm-init.yaml

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh

6. Join all worker nodes to the cluster

  • Get the join token
#Run on a master host to print the join command
kubeadm token create --print-join-command

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.100:6443 --token zpru0r.jkvrdyy2caexr8kk --discovery-token-ca-cert-hash sha256:a45c091dbd8a801152aacd877bcaaaaf152697bfa4536272c905a83612b3bf22
  • Run on every node
systemctl enable kubelet.service

kubeadm join 192.168.1.100:6443 --token zpru0r.jkvrdyy2caexr8kk --discovery-token-ca-cert-hash sha256:a45c091dbd8a801152aacd877bcaaaaf152697bfa4536272c905a83612b3bf22

[root@node6 ~]# kubeadm join 192.168.1.100:6443 --token zpru0r.jkvrdyy2caexr8kk --discovery-token-ca-cert-hash sha256:a45c091dbd8a801152aacd877bcaaaaf152697bfa4536272c905a83612b3bf22
[preflight] running pre-flight checks
I0913 15:33:17.429069    1907 kernel_validator.go:81] Validating kernel version
I0913 15:33:17.429335    1907 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.1.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.100:6443"
[discovery] Requesting info from "https://192.168.1.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.100:6443"
[discovery] Successfully established connection with API Server "192.168.1.100:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node6" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@node6 ~]#
  • Check the nodes
#Run on a master
kubectl get node

[root@master ~]# kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    2m        v1.11.3
master2   NotReady   master    1m        v1.11.3
master3   NotReady   master    1m        v1.11.3
node1     NotReady   <none>    18s       v1.11.3
node2     NotReady   <none>    12s       v1.11.3

7. Configure the network

  • Use the flannel network
cd /root/
mkdir flannel
cd flannel
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

#Because the images are pulled from Google, the pull may fail; you'll have to work around that yourself ~~
  • Check the node status
kubectl get pod -n kube-system
kubectl get node

[root@master ~]# kubectl get pod -n kube-system
NAME                              READY     STATUS    RESTARTS   AGE
coredns-777d78ff6f-c5b9h          1/1       Running   0          27m
coredns-777d78ff6f-fv4fw          1/1       Running   0          27m
kube-apiserver-master             1/1       Running   0          26m
kube-apiserver-master2            1/1       Running   0          26m
kube-apiserver-master3            1/1       Running   0          25m
kube-controller-manager-master    1/1       Running   0          26m
kube-controller-manager-master2   1/1       Running   0          26m
kube-controller-manager-master3   1/1       Running   0          25m
kube-flannel-ds-4hd6r             1/1       Running   0          9m
kube-flannel-ds-g9tvn             1/1       Running   0          9m
kube-flannel-ds-gnrlc             1/1       Running   0          9m
kube-flannel-ds-kkswt             1/1       Running   0          9m
kube-flannel-ds-n7sqv             1/1       Running   2          9m
kube-proxy-7fpbb                  1/1       Running   0          25m
kube-proxy-89g7s                  1/1       Running   0          26m
kube-proxy-b8glx                  1/1       Running   0          27m
kube-proxy-c6qj7                  1/1       Running   0          26m
kube-proxy-xn4k7                  1/1       Running   0          25m
kube-scheduler-master             1/1       Running   0          26m
kube-scheduler-master2            1/1       Running   0          26m
kube-scheduler-master3            1/1       Running   0          25m

#Once all of the kube-flannel-ds-xxxx pods above are Running, the nodes should report Ready

[root@master ~]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    32m       v1.11.3
master2   Ready     master    31m       v1.11.3
master3   Ready     master    31m       v1.11.3
node1     Ready     <none>    30m       v1.11.3
node2     Ready     <none>    30m       v1.11.3
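Since the whole point of mode: ipvs is to replace iptables-based proxying, the IPVS rule set can be inspected on any node (ipvsadm is not installed by this guide, so this is an extra step):

yum install -y ipvsadm
ipvsadm -Ln    #kube-proxy should have created IPVS virtual servers for the service cluster IPs, e.g. 10.96.0.1:443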

8. Testing

  • Create an nginx service to test that workloads and DNS work
cd /root && mkdir nginx && cd nginx

cat << EOF > nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
    name: nginx-port
    targetPort: 80
    protocol: TCP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl apply -f nginx.yaml
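To confirm the Service and its NodePort respond (an extra check; the node IP below is taken from the topology at the top and is only an example):

kubectl get svc nginx
curl http://192.168.1.9:31000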
  • Start a pod to test DNS resolution
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
nslookup nginx
curl nginx
exit
kubectl delete deployment curl

[root@master nginx]# kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-xxxxx:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.103.202.146 nginx.default.svc.cluster.local
[ root@curl-xxxxx:/ ]$ curl nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[ root@curl-xxxxx:/ ]$
  • Test master HA
#Shut down master (192.168.1.1)

init 0

#Switch to master2
#Run kubectl get node

kubectl get node

#master is down now!!!!
[root@master2 ~]# kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    1h        v1.11.3
master2   Ready      master    59m       v1.11.3
master3   Ready      master    59m       v1.11.3
node1     Ready      <none>    58m       v1.11.3
node2     Ready      <none>    58m       v1.11.3

#Create a new pod to check that scheduling still works
kubectl run curl --image=radial/busyboxplus:curl -i --tty
exit
kubectl delete deployment curl

[root@master2 ~]# kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-xxxxx:/ ]$
[ root@curl-xxxxx:/ ]$
[ root@curl-xxxxx:/ ]$
[ root@curl-xxxxx:/ ]$
[ root@curl-xxxxx:/ ]$ date
Thu Sep 13 09:41:31 UTC 2018
  • Test HAProxy HA
#Capture VRRP traffic to see which host currently holds the VIP, then shut that host down

tcpdump -nn host 224.0.100.100

[root@lb2 ~]# tcpdump -nn host 224.0.100.100
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:45:59.768033 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:46:00.769503 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:46:01.771062 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
^C

#The VIP is currently on 192.168.1.4; shut that host down
init 0

#After it goes down, 192.168.1.5 takes over immediately; the VIP is now on 1.5
[root@lb2 ~]# tcpdump -nn host 224.0.100.100
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:48:25.031679 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:26.033805 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:27.035313 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:28.036628 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:29.039011 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:30.041249 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:31.043065 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:32.045007 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:33.046781 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:34.048776 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:35.051280 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 100, authtype simple, intvl 1s, length 20
17:48:35.929482 IP 192.168.1.4 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 0, authtype simple, intvl 1s, length 20
17:48:36.618749 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
17:48:37.699849 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
17:48:38.700669 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
17:48:39.702840 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
17:48:40.704254 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
17:48:41.706221 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
17:48:42.706478 IP 192.168.1.5 > 224.0.100.100: VRRPv2, Advertisement, vrid 6, prio 80, authtype simple, intvl 1s, length 20
^C
19 packets captured
326 packets received by filter
172 packets dropped by kernel

#Switch to master2 and create another pod to confirm
kubectl run curl --image=radial/busyboxplus:curl -i --tty
date
exit
kubectl delete deployment curl

[root@master2 ~]# kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-87b54756-xfgrn:/ ]$ date
Thu Sep 13 09:50:58 UTC 2018
[ root@curl-87b54756-xfgrn:/ ]$
[ root@curl-87b54756-xfgrn:/ ]$ exit
Session ended, resume using 'kubectl attach curl-87b54756-xfgrn -c curl -i -t' command when the pod is running
[root@master2 ~]# kubectl delete deployment curl
deployment.extensions "curl" deleted

Original article: http://blog.51cto.com/bigboss/2174899
