Deploying the k8s cluster master node

Deploying kube-apiserver

The kube-apiserver deployment script:
[[email protected] k8s]# cat apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1   # master node IP
ETCD_SERVERS=$2     # etcd endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=${ETCD_SERVERS} \
--bind-address=${MASTER_ADDRESS} \
--secure-port=6443 \
--advertise-address=${MASTER_ADDRESS} \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
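One detail worth noting in the script above: the unit file is written with `\$KUBE_APISERVER_OPTS`. The backslash stops the generating shell from expanding the variable inside the heredoc, so the literal name reaches the unit file and systemd expands it from the EnvironmentFile at start time. A minimal, self-contained sketch of the same pattern (the `/tmp/demo-unit` path is only for illustration):

```shell
# Inside an unquoted heredoc, \$VAR writes the literal string $VAR into the
# output file, so it is systemd -- not the generating shell -- that expands
# it later from the EnvironmentFile.
OPTS="--v=4"
cat <<EOF > /tmp/demo-unit
ExecStart=/opt/kubernetes/bin/kube-apiserver \$OPTS
EOF
cat /tmp/demo-unit   # prints: ExecStart=/opt/kubernetes/bin/kube-apiserver $OPTS
```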

  Download the binary package

[[email protected] k8s]# wget https://dl.k8s.io/v1.10.13/kubernetes-server-linux-amd64.tar.gz

  Unpack and install

[[email protected] k8s]# tar xf kubernetes-server-linux-amd64.tar.gz
[[email protected] k8s]# cd kubernetes/server/bin/
[[email protected] bin]# ls
apiextensions-apiserver              cloud-controller-manager.tar  kube-apiserver             kube-controller-manager             kubectl     kube-proxy.docker_tag  kube-scheduler.docker_tag
cloud-controller-manager             hyperkube                     kube-apiserver.docker_tag  kube-controller-manager.docker_tag  kubelet     kube-proxy.tar         kube-scheduler.tar
cloud-controller-manager.docker_tag  kubeadm                       kube-apiserver.tar         kube-controller-manager.tar         kube-proxy  kube-scheduler         mounter
[[email protected] ~]# mkdir /opt/kubernetes/{cfg,ssl,bin} -pv
mkdir: created directory "/opt/kubernetes"
mkdir: created directory "/opt/kubernetes/cfg"
mkdir: created directory "/opt/kubernetes/ssl"
mkdir: created directory "/opt/kubernetes/bin"
[[email protected] bin]# cp kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[[email protected] k8s]# ./apiserver.sh 192.168.10.11 https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379
[[email protected] k8s]# cd /opt/kubernetes/cfg/
[[email protected] cfg]# vi kube-apiserver
(The annotations after the backslashes below are explanations only and are not part of the file.)
KUBE_APISERVER_OPTS="--logtostderr=false --log-dir=/opt/kubernetes/logs \   log directory; be sure to create it
--v=4 \
--etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \
--bind-address=192.168.10.11 \   IP address to bind
--secure-port=6443 \   HTTPS port
--advertise-address=192.168.10.11 \   cluster advertise address; other nodes reach the apiserver through this IP
--allow-privileged=true \   allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \   virtual IP range for Service load balancing
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \   admission plugins; gate optional advanced features
--authorization-mode=RBAC,Node \   authorization modes
--kubelet-https=true \   the apiserver talks to the kubelet over HTTPS
--enable-bootstrap-token-auth \   authenticate bootstrap clients and enable automatic certificate issuance
--token-auth-file=/opt/kubernetes/cfg/token.csv \   token file
--service-node-port-range=30000-50000 \   NodePort range for Services
--tls-cert-file=/opt/kubernetes/ssl/server.pem \   apiserver certificate
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \   apiserver private key
--client-ca-file=/opt/kubernetes/ssl/ca.pem \   CA certificate
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \   service-account signing key
--etcd-cafile=/opt/etcd/ssl/ca.pem \   etcd CA certificate
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
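As the annotation on `--log-dir` says, `--logtostderr=false` sends logs to a directory on disk, and the deployment script does not create that directory. Create it before restarting the service:

```shell
# /opt/kubernetes/logs is the path named by --log-dir above;
# kube-apiserver will not create it on its own.
mkdir -p /opt/kubernetes/logs
```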

  Generate the certificates and the token file

[[email protected] k8s]# cat k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.206.176.19",   master IP (replace with your master's IP)
      "10.206.240.188",  LB IP; node IPs need not be listed, though adding them does no harm
      "10.206.240.189",  LB IP
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[[email protected] k8s]# bash k8s-cert.sh
2019/04/22 18:05:08 [INFO] generating a new CA key and certificate from CSR
2019/04/22 18:05:08 [INFO] generate received request
2019/04/22 18:05:08 [INFO] received CSR
2019/04/22 18:05:08 [INFO] generating key: rsa-2048
2019/04/22 18:05:09 [INFO] encoded CSR
2019/04/22 18:05:09 [INFO] signed certificate with serial number 631400127737303589248201910249856863284562827982
2019/04/22 18:05:09 [INFO] generate received request
2019/04/22 18:05:09 [INFO] received CSR
2019/04/22 18:05:09 [INFO] generating key: rsa-2048
2019/04/22 18:05:10 [INFO] encoded CSR
2019/04/22 18:05:10 [INFO] signed certificate with serial number 99345466047844052770348056449571016254842578399
2019/04/22 18:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:10 [INFO] generate received request
2019/04/22 18:05:10 [INFO] received CSR
2019/04/22 18:05:10 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 309283889504556884051139822527420141544215396891
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:11 [INFO] generate received request
2019/04/22 18:05:11 [INFO] received CSR
2019/04/22 18:05:11 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 286610519064253595846587034459149175950956557113
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[[email protected] k8s]# ls
admin.csr       apiserver.sh    ca-key.pem             etcd-cert.sh  kube-proxy.csr       kubernetes                            scheduler.sh     server.pem
admin-csr.json  ca-config.json  ca.pem                 etcd.sh       kube-proxy-csr.json  kubernetes-server-linux-amd64.tar.gz  server.csr
admin-key.pem   ca.csr          controller-manager.sh  k8s-cert      kube-proxy-key.pem   kubernetes.tar.gz                     server-csr.json
admin.pem       ca-csr.json     etcd-cert              k8s-cert.sh   kube-proxy.pem       master.zip

[[email protected] k8s]# cp ca-key.pem ca.pem server-key.pem server.pem /opt/kubernetes/ssl/
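It is worth confirming that the SANs in server.pem cover every address the apiserver will be reached at; the certificate can be inspected with openssl (`cfssl-certinfo -cert server.pem` works too). Below is a self-contained sketch that builds a throwaway certificate with a similar SAN list and prints it; it assumes OpenSSL 1.1.1+ for `-addext`. Against the real cluster, run the final command on /opt/kubernetes/ssl/server.pem instead.

```shell
# Generate a demo cert with a SAN list like server-csr.json's, then print
# its SANs. The /tmp/demo-* paths are illustrative only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.0.0.1,IP:127.0.0.1,DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local"
# For the real certificate: openssl x509 -in /opt/kubernetes/ssl/server.pem ...
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```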

[[email protected] k8s]# cat token.csv
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[[email protected] k8s]# mv token.csv  /opt/kubernetes/cfg/
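Each line of token.csv has the form `token,user,uid,"group"`. The 32-character hex token can be generated with the usual recipe below (shown writing to /tmp so the real file is not clobbered):

```shell
# Generate 16 random bytes, hex-encode them (32 chars), and write a
# token.csv line in the token,user,uid,"group" format shown above.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token.csv
cat /tmp/token.csv
```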

  Start the apiserver

[[email protected] k8s]# systemctl start kube-apiserver
[[email protected] k8s]# ps -ef | grep apiserver
root       3264      1 99 20:35 ?        00:00:01 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 --bind-address=192.168.10.11 --secure-port=6443 --advertise-address=192.168.10.11 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root       3274   1397  0 20:35 pts/0    00:00:00 grep --color=auto apiserver

  Generate the config file and start controller-manager

[[email protected] k8s]# cat controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[[email protected] k8s]# bash controller-manager.sh 127.0.0.1   # argument is the master IP; 127.0.0.1 works because this runs on the master itself
[[email protected] k8s]# ss -lntp
State   Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN  0      128     192.168.10.11:6443    *:*     users:(("kube-apiserver",pid=7604,fd=6))
LISTEN  0      128     192.168.10.11:2379    *:*     users:(("etcd",pid=1428,fd=7))
LISTEN  0      128     127.0.0.1:2379        *:*     users:(("etcd",pid=1428,fd=6))
LISTEN  0      128     127.0.0.1:10252       *:*     users:(("kube-controller",pid=7593,fd=3))
LISTEN  0      128     192.168.10.11:2380    *:*     users:(("etcd",pid=1428,fd=5))
LISTEN  0      128     127.0.0.1:8080        *:*     users:(("kube-apiserver",pid=7604,fd=5))
LISTEN  0      128     *:22                  *:*     users:(("sshd",pid=902,fd=3))
LISTEN  0      100     127.0.0.1:25          *:*     users:(("master",pid=1102,fd=13))
LISTEN  0      128     :::10257              :::*    users:(("kube-controller",pid=7593,fd=5))
LISTEN  0      128     :::22                 :::*    users:(("sshd",pid=902,fd=4))
LISTEN  0      100     ::1:25                :::*    users:(("master",pid=1102,fd=14))

  Generate the config file and start the scheduler

[[email protected] k8s]# cat scheduler.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=${MASTER_ADDRESS}:8080 \
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
[[email protected] k8s]# bash scheduler.sh 127.0.0.1
[[email protected] k8s]# ss -lntp
State   Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN  0      128     192.168.10.11:2379    *:*     users:(("etcd",pid=1428,fd=7))
LISTEN  0      128     127.0.0.1:2379        *:*     users:(("etcd",pid=1428,fd=6))
LISTEN  0      128     127.0.0.1:10252       *:*     users:(("kube-controller",pid=7809,fd=3))
LISTEN  0      128     192.168.10.11:2380    *:*     users:(("etcd",pid=1428,fd=5))
LISTEN  0      128     *:22                  *:*     users:(("sshd",pid=902,fd=3))
LISTEN  0      100     127.0.0.1:25          *:*     users:(("master",pid=1102,fd=13))
LISTEN  0      128     :::10251              :::*    users:(("kube-scheduler",pid=8073,fd=3))
LISTEN  0      128     :::10257              :::*    users:(("kube-controller",pid=7809,fd=5))
LISTEN  0      128     :::22                 :::*    users:(("sshd",pid=902,fd=4))
LISTEN  0      100     ::1:25                :::*    users:(("master",pid=1102,fd=14))

  The generated kube-controller-manager configuration file

[[email protected] k8s]# cat /opt/kubernetes/cfg/kube-controller-manager 

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 \   apiserver address
--leader-elect=true \   automatic leader election for high availability
--address=127.0.0.1 \   listen address; not exposed externally
--service-cluster-ip-range=10.0.0.0/24 \   must match the apiserver's setting
--cluster-name=kubernetes \   cluster name
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \   signing certificate
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \   signing key
--root-ca-file=/opt/kubernetes/ssl/ca.pem \   root CA certificate
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"   certificate validity period

  The generated kube-scheduler configuration file

[[email protected] k8s]# cat /opt/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

  Copy the kubectl client tool to /usr/bin

[[email protected] k8s]# cp kubernetes/server/bin/kubectl /usr/bin/

  Check the cluster status

[[email protected] k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok

  

Original article: https://www.cnblogs.com/rdchenxi/p/10754366.html
