K8s Single-Master Deployment, Part 3: APIserver + Controller-Manager + Scheduler

All of the following operations are performed on the master node.


Server role assignment

Role     Address            Installed components
master   192.168.142.220    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node1    192.168.142.136    kubelet, kube-proxy, docker, flannel, etcd
node2    192.168.142.132    kubelet, kube-proxy, docker, flannel, etcd

1. Deploying the APIserver service

Create the apiserver working directory

[root@master k8s]# pwd
/k8s
[root@master k8s]# mkdir apiserver
[root@master k8s]# cd apiserver/

Create the CA certificate (watch the file paths!)

//Define the CA and generate the CA configuration file
[root@master apiserver]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

//Generate the CA certificate signing request file
[root@master apiserver]# cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
         "algo": "rsa",
         "size": 2048
    },
    "names": [
       {
              "C": "CN",
              "L": "Beijing",
              "ST": "Beijing",
              "O": "k8s",
              "OU": "System"
       }
    ]
}
EOF

//Sign the certificate (generates ca.pem and ca-key.pem)
[root@master apiserver]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Create the apiserver server certificate

//Define the apiserver certificate and generate its CSR configuration file.
//Note: JSON does not allow inline comments, so the hosts list must stay
//comment-free. The addresses below are: master1 (192.168.142.220), master2
//(192.168.142.120, added later for HA), the VIP (192.168.142.20), and the
//nginx load balancers (192.168.142.130 master, 192.168.142.140 backup).
//Adjust them for your environment.
[root@master apiserver]# cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.142.220",
      "192.168.142.120",
      "192.168.142.20",
      "192.168.142.130",
      "192.168.142.140",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
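Since JSON does not permit comments, it is easy to break these cfssl input files by annotating them in place. One quick way to catch that before running cfssl is to validate the file with Python's `json.tool`. This is just a convenience sketch (the file name is an example, and `jq` would work equally well if installed):

```shell
# Validate a cfssl input file before use; python3 -m json.tool exits non-zero
# on malformed JSON. The file written here is a trimmed example CSR.
cat > /tmp/server-csr-check.json <<'EOF'
{
    "CN": "kubernetes",
    "hosts": ["10.0.0.1", "127.0.0.1", "192.168.142.220"],
    "key": { "algo": "rsa", "size": 2048 }
}
EOF
if python3 -m json.tool /tmp/server-csr-check.json > /dev/null; then
    echo "valid JSON"
else
    echo "malformed JSON"
fi
```

Run the same check against each of the `*-csr.json` and `ca-config.json` files before signing.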

//Sign the certificate (generates server.pem and server-key.pem)
[root@master apiserver]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Create the admin certificate

[root@master apiserver]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

//Sign the certificate (generates admin.pem and admin-key.pem)
[root@master apiserver]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Create the kube-proxy certificate

[root@master apiserver]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

//Sign the certificate (generates kube-proxy.pem and kube-proxy-key.pem)
[root@master apiserver]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

A total of 8 certificate files should now exist:

[root@master apiserver]# ls *.pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
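Before copying the certificates into place, it can be worth scripting this existence check rather than eyeballing the `ls` output. The helper below is a hypothetical convenience, not part of the original procedure:

```shell
# Hypothetical helper: verify that all eight expected PEM files exist in a
# directory before copying them into /opt/kubernetes/ssl/.
check_certs() {
    dir="${1:-.}"
    missing=0
    for name in ca ca-key server server-key admin admin-key kube-proxy kube-proxy-key; do
        if [ ! -f "$dir/$name.pem" ]; then
            echo "missing: $name.pem"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all 8 certificates present"
    fi
    return $missing
}
```

For example, `check_certs /k8s/apiserver` should print "all 8 certificates present" at this point.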

Copy the binaries into place

//Create the target directories
[root@master apiserver]# mkdir -p /opt/kubernetes/{bin,ssl,cfg}
[root@master apiserver]# cp -p *.pem /opt/kubernetes/ssl/

//Copy the server binaries
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# cp -p kube-apiserver kubectl /opt/kubernetes/bin/

Create the token file

[root@master bin]# cd /opt/kubernetes/cfg

//Generate a random bootstrap token
[root@master cfg]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@master cfg]# cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
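Note that the quotes in the `tr -d` command must be plain ASCII single quotes; the curly quotes that often sneak in when copy-pasting from a web page make tr delete the wrong characters. The snippet below regenerates a token and sanity-checks its shape (16 random bytes should hex-encode to exactly 32 lowercase hex characters):

```shell
# Generate a bootstrap token and verify it is 32 lowercase hex characters.
# tr strips both the spaces between od's 4-byte groups and the newline.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "token: $BOOTSTRAP_TOKEN"
if echo "$BOOTSTRAP_TOKEN" | grep -Eq '^[0-9a-f]{32}$'; then
    echo "token format OK"
else
    echo "unexpected token format"
fi
```

If the check fails, the most likely cause is a mangled tr argument from a copy-paste.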

Create the apiserver unit file

[root@master cfg]# vim /usr/lib/systemd/system/kube-apiserver.service
//Write the following unit by hand
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

//Mark the unit file executable (not strictly required for systemd units)
[root@master cfg]# chmod +x /usr/lib/systemd/system/kube-apiserver.service

Create the apiserver configuration file

[root@master ssl]# vim /opt/kubernetes/cfg/kube-apiserver
//Write by hand; be sure to adjust the IP addresses for your environment
KUBE_APISERVER_OPTS="--logtostderr=true --v=4 --etcd-servers=https://192.168.142.220:2379,https://192.168.142.136:2379,https://192.168.142.132:2379 --bind-address=192.168.142.220 --secure-port=6443 --advertise-address=192.168.142.220 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem"

[root@master ssl]# mkdir -p /var/log/kubernetes/apiserver

Start the apiserver service

[root@master cfg]# systemctl daemon-reload
[root@master cfg]# systemctl start kube-apiserver
[root@master cfg]# systemctl status kube-apiserver
[root@master cfg]# systemctl enable kube-apiserver

Check that the service is up

[root@master bin]# netstat -atnp | egrep "(6443|8080)"
//6443 is the secure HTTPS port; 8080 is the local insecure HTTP port
tcp        0      0 192.168.142.220:6443    0.0.0.0:*               LISTEN      12898/kube-apiserve
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      12898/kube-apiserve
tcp        0      0 192.168.142.220:6443    192.168.142.220:60052   ESTABLISHED 12898/kube-apiserve
tcp        0      0 192.168.142.220:60052   192.168.142.220:6443    ESTABLISHED 12898/kube-apiserve
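The netstat output above is awkward to script against; a small connection probe gives a clearer pass/fail. This is a hypothetical helper (it shells out to python3 for a portable TCP connect), not part of the original procedure:

```shell
# Hypothetical helper: probe whether a TCP port accepts connections;
# prints "open" or "closed".
port_open() {
    python3 - "$1" "$2" <<'PYEOF'
import socket, sys
s = socket.socket()
s.settimeout(2)
try:
    s.connect((sys.argv[1], int(sys.argv[2])))
    print("open")
except OSError:
    print("closed")
finally:
    s.close()
PYEOF
}

# Example: port 1 on localhost is almost always closed.
port_open 127.0.0.1 1
```

On the master, once kube-apiserver is running, both `port_open 192.168.142.220 6443` and `port_open 127.0.0.1 8080` should print "open".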


2. Deploying the Controller-Manager service

Copy the binary into place

[root@master bin]# pwd
/k8s/kubernetes/server/bin
//Copy the binary
[root@master bin]# cp -p kube-controller-manager /opt/kubernetes/bin/

Write the kube-controller-manager configuration file

[root@master bin]# cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

EOF

Write the kube-controller-manager unit file

[root@master bin]# cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start the service

//Mark the unit file executable and start the service
[root@master cfg]# chmod +x /usr/lib/systemd/system/kube-controller-manager.service
[root@master cfg]# systemctl start kube-controller-manager
[root@master cfg]# systemctl status kube-controller-manager
[root@master cfg]# systemctl enable kube-controller-manager

Check that the service is up

[root@master bin]# netstat -atnp | grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      12964/kube-controll
tcp6       0      0 :::10257                :::*                    LISTEN      12964/kube-controll


3. Deploying the Scheduler service

Copy the binary into place

[root@master bin]# pwd
/k8s/kubernetes/server/bin
//Copy the binary
[root@master bin]# cp -p kube-scheduler /opt/kubernetes/bin/

Write the configuration file

[root@master bin]# cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
EOF

Write the unit file

[root@master bin]# cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start the service

[root@master bin]# chmod +x /usr/lib/systemd/system/kube-scheduler.service
[root@master bin]# systemctl daemon-reload
[root@master bin]# systemctl start kube-scheduler
[root@master bin]# systemctl status kube-scheduler
[root@master bin]# systemctl enable kube-scheduler

Check that the service is up

[root@master bin]# netstat -atnp | grep schedule
tcp6       0      0 :::10251                :::*                    LISTEN



That completes the deployment of every service required on the master node.

//Check the status of the master components
[root@master bin]# /opt/kubernetes/bin/kubectl get cs
//If everything succeeded, all components should report Healthy
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
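Beyond `kubectl get cs`, each component also exposes a `/healthz` endpoint that can be probed directly. The helper below is a sketch; the ports listed are the defaults for this k8s 1.9-era layout (8080 is the apiserver's local insecure port, 10252 and 10251 the controller-manager and scheduler health ports), so adjust them if your versions differ:

```shell
# Hypothetical helper: fetch a component's /healthz endpoint and print the body.
check_healthz() {
    url="$1"
    body=$(curl -s --max-time 2 "$url") || { echo "$url unreachable"; return 1; }
    echo "$url -> $body"
}

# On this master, once every service is running:
#   check_healthz http://127.0.0.1:8080/healthz    # kube-apiserver (insecure port)
#   check_healthz http://127.0.0.1:10252/healthz   # kube-controller-manager
#   check_healthz http://127.0.0.1:10251/healthz   # kube-scheduler
```

Each probe should print `... -> ok` when the corresponding component is healthy.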

To be continued~~~

Original article: https://blog.51cto.com/14484404/2469551
