Kubernetes 1.8 High-Availability Installation (Part 3)

3. Installing the master components (etcd/api-server/controller/scheduler)

3.1 Install the RPM packages and configure kubelet

Pick the machines that will act as masters; on each of them, install the RPM packages and configure kubelet.

Note:

All of the images used here have been pushed to my Docker Hub repository; pull them from there if needed:

https://hub.docker.com/u/foxchan/
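
If you want to pre-pull the images onto each master, something like this works (the tags below are the ones referenced in the manifests that follow):

docker pull foxchan/google_containers/etcd-amd64:3.0.17
docker pull foxchan/google_containers/kube-apiserver-amd64:v1.8.1
docker pull foxchan/google_containers/kube-controller-manager-amd64:v1.8.1
docker pull foxchan/google_containers/kube-scheduler-amd64:v1.8.1
docker pull foxchan/google_containers/pause-amd64:3.0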

Install the RPM packages

yum localinstall -y kubectl-1.8.0-1.x86_64.rpm kubelet-1.8.0-1.x86_64.rpm kubernetes-cni-0.5.1-1.x86_64.rpm

Create the manifest directory

mkdir -p /etc/kubernetes/manifests

Edit the kubelet configuration:
/etc/systemd/system/kubelet.service.d/kubelet.conf

[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.12 --cluster-domain=cluster.local"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_EXTRA_ARGS=--v=2  --pod-infra-container-image=foxchan/google_containers/pause-amd64:3.0 --fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS  $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

Note:

--cluster-dns=10.96.0.12    plan this IP yourself, and keep it consistent with the service IP range used when the certificates were created

--fail-swap-on=false             starting with 1.8, kubelet will not start if swap is enabled on the machine; the default is true (see the snippet below if you prefer to disable swap instead)
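
A sketch for turning swap off instead of passing --fail-swap-on=false (assumes the swap entries live in /etc/fstab):

# turn swap off immediately
swapoff -a
# comment out the swap entries so it stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab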

Start kubelet

systemctl daemon-reload
systemctl restart kubelet
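
Confirm kubelet came up cleanly:

systemctl status kubelet
# or follow its log
journalctl -u kubelet -f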


3.2 Install the etcd cluster

Create etcd.yaml and place it in /etc/kubernetes/manifests

Note:
Create the log files ahead of time so that the hostPath mounts below attach to existing files rather than auto-created directories (see the command after this list):
/var/log/kube-apiserver.log
/var/log/kube-etcd.log
/var/log/kube-controller-manager.log
/var/log/kube-scheduler.log
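
For example:

touch /var/log/kube-apiserver.log /var/log/kube-etcd.log /var/log/kube-controller-manager.log /var/log/kube-scheduler.log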

# Create the relevant directories to match the volume mounts below
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd-server
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - image: foxchan/google_containers/etcd-amd64:3.0.17
    name: etcd-container
    command:
    - /bin/sh
    - -c
    - /usr/local/bin/etcd
      --name=etcd0
      --initial-advertise-peer-urls=http://master_IP:2380
      --listen-peer-urls=http://master_IP:2380
      --advertise-client-urls=http://master_IP:2379
      --listen-client-urls=http://master_IP:2379,http://127.0.0.1:2379
      --data-dir=/var/etcd/data
      --initial-cluster-token=emar-etcd-cluster
      --initial-cluster=etcd0=http://master_IP1:2380,etcd1=http://master_IP2:2380,etcd2=http://master_IP3:2380
      --initial-cluster-state=new 1>>/var/log/kube-etcd.log 2>&1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/log/kube-etcd.log
      name: logfile
    - mountPath: /var/etcd
      name: varetcd
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-etcd.log
    name: logfile
  - hostPath:
      path: /var/etcd/data
    name: varetcd
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/kubernetes/
    name: k8s
status: {}

Repeat steps 3.1-3.2 on all three master machines.
Parameter notes

  • --name=etcd0: each etcd member name must be unique
  • client-urls: change to the IP of the corresponding machine

kubelet periodically scans the manifests directory and launches the pods defined by the files inside it.
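
Once all three members are running you can check cluster health from any master. The /health endpoint is the same one the livenessProbe polls; the etcdctl line is a sketch that assumes etcdctl is installed on the host (etcdctl shipped with etcd 3.0.x defaults to the v2 API, which provides cluster-health):

curl http://127.0.0.1:2379/health
# {"health": "true"}
etcdctl --endpoints=http://master_IP1:2379,http://master_IP2:2379,http://master_IP3:2379 cluster-health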

3.3 Install kube-apiserver

Create kube-apiserver.yaml and place it in /etc/kubernetes/manifests

# Create the relevant directories to match the volume mounts below
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-apiserver
      --kubelet-https=true
      --enable-bootstrap-token-auth=true
      --token-auth-file=/etc/kubernetes/token.csv
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem
      --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem
      --client-ca-file=/etc/kubernetes/pki/ca.pem
      --service-account-key-file=/etc/kubernetes/pki/ca-key.pem
      --insecure-port=9080
      --secure-port=6443
      --insecure-bind-address=0.0.0.0
      --bind-address=0.0.0.0
      --advertise-address=master_IP
      --storage-backend=etcd3
      --etcd-servers=http://master_IP1:2379,http://master_IP2:2379,http://master_IP3:2379
      --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction
      --allow-privileged=true
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --authorization-mode=Node,RBAC
      --v=2 1>>/var/log/kube-apiserver.log 2>&1
    image: foxchan/google_containers/kube-apiserver-amd64:v1.8.1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
status: {}

Parameter notes:

  • --advertise-address: change to the IP of each machine
  • --enable-bootstrap-token-auth: enables the Bootstrap Token authenticator
  • --authorization-mode: Node has been added to the authorization modes, because from 1.8 on the system:node role is no longer granted automatically to the system:nodes group
  • For the same reason, NodeRestriction has also been added to --admission-control
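
Before the full kubectl check below, a quick local smoke test against the insecure port configured above (9080):

curl http://127.0.0.1:9080/healthz
# ok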

Verification: the API is now responding (controller-manager still shows Unhealthy because it has not been installed yet):

kubectl --server=https://master_IP:6443 --certificate-authority=/etc/kubernetes/pki/ca.pem  --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem get componentstatuses
NAME                 STATUS      MESSAGE                                                                                        ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused   
scheduler            Healthy     ok                                                                                             
etcd-1               Healthy     {"health": "true"}                                                                             
etcd-0               Healthy     {"health": "true"}                                                                             
etcd-2               Healthy     {"health": "true"}

3.4 Install kube-controller-manager

Create kube-controller-manager.yaml and place it in /etc/kubernetes/manifests

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-controller-manager 
      --master=127.0.0.1:9080
      --controllers=*,bootstrapsigner,tokencleaner
      --root-ca-file=/etc/kubernetes/pki/ca.pem
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem
      --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem
      --leader-elect=true 
      --v=2 1>>/var/log/kube-controller-manager.log 2>&1
    image: foxchan/google_containers/kube-controller-manager-amd64:v1.8.1
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
status: {}

Parameter notes

  • --controllers=*,bootstrapsigner,tokencleaner: enables the bootstrap token controllers
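
Once the pod is running, the same health endpoint that the livenessProbe polls can be checked by hand:

curl http://127.0.0.1:10252/healthz
# ok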

3.5 Install kube-scheduler

3.5.1 Configure scheduler.conf

cd /etc/kubernetes
export KUBE_APISERVER="https://master_VIP:6443"

# set-cluster
kubectl config set-cluster kubernetes   --certificate-authority=/etc/kubernetes/pki/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=scheduler.conf

# set-credentials
kubectl config set-credentials system:kube-scheduler   --client-certificate=/etc/kubernetes/pki/scheduler.pem   --embed-certs=true   --client-key=/etc/kubernetes/pki/scheduler-key.pem   --kubeconfig=scheduler.conf

# set-context
kubectl config set-context system:[email protected]   --cluster=kubernetes   --user=system:kube-scheduler   --kubeconfig=scheduler.conf

# set default context
kubectl config use-context system:[email protected] --kubeconfig=scheduler.conf

After scheduler.conf has been generated, distribute it to the /etc/kubernetes directory on every master node, for example with the loop below.
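
A sketch that assumes root SSH access between the masters; replace the placeholder IPs as usual:

for ip in master_IP2 master_IP3; do
  scp /etc/kubernetes/scheduler.conf root@${ip}:/etc/kubernetes/
done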

3.5.2 Create kube-scheduler.yaml and place it in /etc/kubernetes/manifests

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-scheduler 
      --address=127.0.0.1
      --leader-elect=true 
      --kubeconfig=/etc/kubernetes/scheduler.conf 
      --v=2 1>>/var/log/kube-scheduler.log 2>&1
    image: foxchan/google_containers/kube-scheduler-amd64:v1.8.1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
    name: kubeconfig
status: {}

At this point kube-scheduler is deployed on all three master nodes; the instances elect a leader, and only the leader actively schedules.
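
In 1.8 the scheduler's leader lock is stored as an annotation on the kube-scheduler Endpoints object, so you can check which instance currently holds it:

kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'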
Check the kube-scheduler log:

tail -f /var/log/kube-scheduler.log
I1024 05:20:44.704783       7 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"1201fc85-b7e1-11e7-9792-525400b406cc", APIVersion:"v1", ResourceVersion:"87114", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kvm-sh002154 became leader

Check that all core components of the Kubernetes master cluster now report healthy:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}