Super-Simple Kubernetes High-Availability Cluster Installation

Prerequisites

  • System requirements: 64-bit CentOS 7.6
  • Disable the firewall and SELinux
  • Disable the OS swap partition (running Kubernetes with swap enabled is not recommended)
  • Pre-configure a hostname on every node; they only need to be unique
  • Configure the first master for passwordless SSH key login to every node, including itself (see the sketch below)
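
If passwordless SSH is not set up yet, the following is a minimal sketch; the hostname and IP addresses are placeholders, so substitute your own:

# Set a unique hostname on each node (run on that node)
hostnamectl set-hostname k8s-master1

# On the first master: generate a key pair and push it to every node, including itself
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in 172.16.10.114 172.16.10.101 172.16.10.102; do
    ssh-copy-id root@$host
done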

Environment Notes

The installation method in this guide is intended for small-scale deployments.

Multi-master mode (at least three masters); keepalived must be installed on every master node.

Preparation (run on every node)

Configure the Docker and Kubernetes package repositories
# Change to the yum repo directory
cd /etc/yum.repos.d/
# Configure the Aliyun docker-ce repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the Aliyun Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
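
As a quick sanity check, you can confirm both new repos are visible to yum before installing anything (exact output depends on your mirror state):

# Both repo ids should appear in the list
yum repolist | grep -E 'docker-ce|kubernetes'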
Configure kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
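
On a fresh CentOS 7 install the br_netfilter module may not be loaded yet, in which case the two bridge-nf-call settings above cannot be applied. An extra step worth running:

# Load br_netfilter immediately and persist it across reboots
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-apply the sysctl settings now that the module is present
sysctl --system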
Install the required packages
# Install kubeadm, kubelet, and kubectl
yum install kubeadm kubectl kubelet -y

# Enable kubelet and docker to start on boot
systemctl enable docker kubelet

# Start docker
systemctl start docker
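
The install command above pulls whatever version is newest in the repo, while the example output later in this guide comes from v1.14.2. For a multi-master cluster it is worth keeping every node on one known version; a sketch, with the version string as an example only:

# Pin an explicit version on every node (example version)
yum install kubeadm-1.14.2 kubelet-1.14.2 kubectl-1.14.2 -y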

Deployment

Install keepalived (run on all masters)
# If you have a load balancer you can skip this step and use the LB address directly
# Run this first on the master being initialized, so the VIP attaches to it; otherwise stop keepalived on the other masters

# After installation you can add health checks to suit your own needs
yum install keepalived -y

# Back up the original keepalived config file
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

# Generate a new keepalived config file; adjust the commented fields below for each master
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s-master1                      # hostname of this master node
   vrrp_mcast_group4 224.26.1.1         

}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    nopreempt
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.20.1.8                            # the VIP address
    }
}
EOF

# Enable keepalived on boot and start it
systemctl enable keepalived
systemctl start keepalived
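
Before moving on, confirm the VIP actually attached to this master; adjust eth0 and the address to match your own keepalived config:

# The VIP should show up as an additional address on the keepalived interface
ip addr show eth0 | grep 10.20.1.8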
Generate the kubeadm master configuration file
cd && cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "172.29.2.188"  #请求改为你的vip地址
controlPlaneEndpoint: "172.29.2.188:6443"  #请求改为你的vip地址
imageRepository: registry.cn-hangzhou.aliyuncs.com/peter1009
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
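
Optionally, pre-pulling the control-plane images makes the init step faster and surfaces registry problems early; this reads the imageRepository from kubeadm.yaml:

# Pre-pull the control-plane images (optional)
kubeadm config images pull --config kubeadm.yaml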
Initialize the first master
# Use the kubeadm.yaml generated in the previous step
kubeadm init --config kubeadm.yaml
# The previous command produces output like the following
root@k8s4:~# kubeadm init --config kubeadm.yaml
I0522 06:20:13.352644    2622 version.go:96] could not fetch a Kubernetes version from
......... output omitted
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498     --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72     --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498     --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
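
The endpoint, token, and CA-cert hash in the join commands above are from this example run; use the values printed by your own init. If the bootstrap token has expired (the default lifetime is 24 hours), a fresh worker join command can be generated on the first master:

# Print a new join command with a fresh token
kubeadm token create --print-join-command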
Install the remaining cluster nodes
# Quote EOF so the $-variables expand when copy.sh runs, not when it is written
cat <<'EOF' > copy.sh
CONTROL_PLANE_IPS="172.16.10.101 172.16.10.102"  # change these two IPs to your second/third master IPs
for host in ${CONTROL_PLANE_IPS}; do
    ssh $host mkdir -p /etc/kubernetes/pki/etcd
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
EOF

# This step will fail if passwordless SSH login has not been configured
bash -x copy.sh
# On this node, run the commands from the init output so kubectl can access the cluster
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the other master nodes, run the control-plane join command from the init output (only after copy.sh has completed successfully)
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498     --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72     --experimental-control-plane
# On the remaining non-master (worker) nodes, run the worker join command from the init output
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498     --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
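
With the other masters joined, it is worth confirming the API server answers on the VIP itself, since that is the endpoint every node joined against; replace the address with your own VIP:

# The health endpoint should return "ok" through the VIP
curl -k https://172.29.2.188:6443/healthz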
Install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check that the installation completed
root@k8s4:~# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k8s4   Ready    master   20m   v1.14.2
root@k8s4:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-8cc96f57d-cfr4j        1/1     Running   0          20m
kube-system   coredns-8cc96f57d-stcz6        1/1     Running   0          20m
kube-system   etcd-k8s4                      1/1     Running   0          19m
kube-system   kube-apiserver-k8s4            1/1     Running   0          19m
kube-system   kube-controller-manager-k8s4   1/1     Running   0          19m
kube-system   kube-flannel-ds-amd64-k4q6q    1/1     Running   0          50s
kube-system   kube-proxy-lhjsf               1/1     Running   0          20m
kube-system   kube-scheduler-k8s4            1/1     Running   0          19m
Test that the cluster works
# Remove the node taint so the master can be scheduled normally; replace k8s4 with your own cluster's node name
kubectl taint node k8s4 node-role.kubernetes.io/master:NoSchedule-
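
You can confirm the taint is gone before deploying anything (again, k8s4 is this example's node name):

# The Taints field should now report <none>
kubectl describe node k8s4 | grep Taints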

# Create an nginx deployment
root@k8s4:~# kubectl create deploy nginx --image nginx
deployment.apps/nginx created

root@k8s4:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65f88748fd-9sk6z   1/1     Running   0          2m44s

# Expose nginx outside the cluster
root@k8s4:~# kubectl expose deploy nginx --port=80 --type=NodePort
service/nginx exposed
root@k8s4:~# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        25m
nginx        NodePort    10.104.109.234   <none>        80:32129/TCP   5s
root@k8s4:~# curl 127.0.0.1:32129
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
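
Finally, since this is an HA cluster, a simple failover test is worth running: stop keepalived on the master that currently holds the VIP and check that the VIP moves and the cluster stays reachable. A minimal sketch, using the interface and VIP from the keepalived config above:

# On the master currently holding the VIP
systemctl stop keepalived

# On another master: the VIP should appear here within a few seconds
ip addr show eth0 | grep 10.20.1.8

# kubectl should keep working through the VIP endpoint
kubectl get nodes

# Restore keepalived afterwards
systemctl start keepalived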

Original article: https://blog.51cto.com/linuxmaizi/2419889
