k8s 1.13.0 Binary Deployment: Master Node (Part 3)

Deploying the apiserver

Create the JSON config file for generating the CSR
[root@k8s-master1 ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.0.123",
    "192.168.0.124",
    "192.168.0.130",
    "10.0.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Generate the kubernetes certificate and private key
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
Distribute the certificates
[root@k8s-master1 ssl]# cp kubernetes*.pem /opt/kubernetes/ssl/
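Before distributing the certificate, it is worth confirming that every host listed in kubernetes-csr.json actually made it into the certificate's SAN field. On the master this is a one-liner against kubernetes.pem (shown in the first comment). The sketch below instead builds a throwaway self-signed certificate carrying a similar SAN list, so the inspection command can be tried on any machine; the /tmp file names are illustrative only.

```shell
# On the master, inspect the real certificate with:
#   openssl x509 -in /opt/kubernetes/ssl/kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
# Portable demo: create a self-signed cert with a SAN list, then inspect it.
cat > /tmp/san-demo.cnf <<'EOF'
[req]
distinguished_name = dn
prompt = no
[dn]
CN = kubernetes
[san]
subjectAltName = IP:127.0.0.1,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local
EOF
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/san-demo-key.pem -out /tmp/san-demo.pem \
    -config /tmp/san-demo.cnf -extensions san
# Every name and IP from the CSR's "hosts" list should appear here:
openssl x509 -in /tmp/san-demo.pem -noout -text | grep -A1 "Subject Alternative Name"
```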
Prepare the binaries

Download the binary package: https://github.com/kubernetes/kubernetes

cd /usr/local/src/
wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/

Create the client token file used by kube-apiserver
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
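The token is just 16 random bytes rendered as 32 hex characters. A quick sanity check on the generated value before writing token.csv (variable name as in the step above) can be sketched as:

```shell
# Generate a bootstrap token the same way as above and verify its shape:
# 16 random bytes -> 32 lowercase hex characters, no spaces or newlines.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token: ${BOOTSTRAP_TOKEN}"
# Abort early if the token contains anything other than hex digits...
case "${BOOTSTRAP_TOKEN}" in
    *[!0-9a-f]*) echo "unexpected characters in token" >&2; exit 1 ;;
esac
# ...or is not exactly 32 characters long.
[ "${#BOOTSTRAP_TOKEN}" -eq 32 ] || { echo "unexpected token length" >&2; exit 1; }
```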
Create the kube-apiserver config file
[root@k8s-master1 ~]# vim /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/opt/kubernetes/log \
--etcd-servers=https://192.168.0.123:2379,https://192.168.0.125:2379,https://192.168.0.126:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.0.123 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/etcd.pem \
--etcd-keyfile=/opt/kubernetes/ssl/etcd-key.pem"
Parameter notes:

--logtostderr: log to stderr; set to false here so logs go to --log-dir instead
--v: log verbosity level
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to other cluster members
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enables TLS bootstrapping, covered later
--token-auth-file: token file
--service-node-port-range: port range allocated to NodePort Services

Create the kube-apiserver systemd service
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start the apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver
Access the API endpoint over HTTPS
[root@k8s-master1 ~]# curl -L --cacert /opt/kubernetes/ssl/ca.pem https://192.168.0.123:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.0.123:6443"
    }
  ]
}
[root@k8s-master1 ~]# curl -L http://127.0.0.1:8080/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.0.123:6443"
    }
  ]
}

Deploying the Controller Manager

Create the config file
[root@k8s-master1 ~]# vim /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/opt/kubernetes/log \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
Create the service file
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

Deploying the scheduler

Create the config file
[root@k8s-master1 ~]# vim /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/opt/kubernetes/log \
--master=127.0.0.1:8080 \
--leader-elect"

--master: connect to the local apiserver
--leader-elect: elect a leader automatically when multiple instances run (HA)

Create the service file
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

Deploying master2

Copy master1's configuration over to master2, change the node-specific IPs, and start the services

scp -r /opt/kubernetes 192.168.0.124:/opt/
scp /usr/lib/systemd/system/kube-* 192.168.0.124:/usr/lib/systemd/system/
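To make "change the node-specific IPs" concrete: in /opt/kubernetes/cfg/kube-apiserver the per-node flag is --advertise-address, which must become master2's address (192.168.0.124). A hypothetical sketch of that edit, run against a scratch copy so it can be tested safely on any machine (the /tmp paths and the shortened flag string are illustrative only):

```shell
# Demonstrate the per-node edit on a scratch copy of the config
# (on the real master2, edit /opt/kubernetes/cfg/kube-apiserver instead).
mkdir -p /tmp/k8s-demo
printf '%s\n' 'KUBE_APISERVER_OPTS="--advertise-address=192.168.0.123 --bind-address=0.0.0.0"' \
    > /tmp/k8s-demo/kube-apiserver
# Rewrite master1's advertise address to master2's.
sed -i 's/--advertise-address=192\.168\.0\.123/--advertise-address=192.168.0.124/' /tmp/k8s-demo/kube-apiserver
grep -- '--advertise-address=192.168.0.124' /tmp/k8s-demo/kube-apiserver
```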

Access the master2 API

[root@k8s-master2 ~]# curl -L --cacert /opt/kubernetes/ssl/ca.pem https://192.168.0.124:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.0.124:6443"
    }
  ]
}
[root@k8s-master2 ~]# curl -L http://127.0.0.1:8080/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.0.124:6443"
    }
  ]
}

Configuring apiserver high availability

Install keepalived
yum -y install keepalived

keepalived config file on master1

[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived 

global_defs {
   # recipient addresses for notification email
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   # sender address for notification email
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id API_MASTER
} 

vrrp_script check_api {
    script "/etc/keepalived/check_api.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; the backup is set to 90
    advert_int 1    # interval between VRRP advertisements, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.130/24
    }
    track_script {
        check_api
    }
}

keepalived config file on master2

[root@k8s-master2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived 

global_defs {
   # recipient addresses for notification email
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   # sender address for notification email
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id API_MASTER
} 

vrrp_script check_api {
    script "/etc/keepalived/check_api.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90     # priority; lower than the master's 100
    advert_int 1    # interval between VRRP advertisements, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.130/24
    }
    track_script {
        check_api
    }
}

Prepare the apiserver health-check script
[root@k8s-master1 ~]# vim /etc/keepalived/check_api.sh
#!/bin/bash
count=$(ps -ef | grep kube-apiserver | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    systemctl stop keepalived
fi

[root@k8s-master1 ~]# chmod +x /etc/keepalived/check_api.sh
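The ps|grep pipeline above counts kube-apiserver processes while filtering out the grep itself and the script's own PID. If pgrep is available, a slightly more robust variant matches on the full command line and never counts itself; the `sleep 300` below is just a stand-in for kube-apiserver so the idea can be tried anywhere:

```shell
# pgrep -f matches against the full command line; pgrep excludes itself,
# so no "grep|$$" filtering is needed.
sleep 300 &
demo_pid=$!
count=$(pgrep -fc 'sleep 300' || true)
echo "matching processes: ${count}"
kill "${demo_pid}"
# In check_api.sh this logic would become (sketch):
#   count=$(pgrep -c kube-apiserver || true)
#   [ "$count" -eq 0 ] && systemctl stop keepalived
```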
Start keepalived
systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived

High-availability test

Check the IP addresses

[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8a:2b:5f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.123/24 brd 192.168.0.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet 192.168.0.130/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8a:2b5f/64 scope link
       valid_lft forever preferred_lft forever

[root@k8s-master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:77:dc:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.124/24 brd 192.168.0.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe77:dc9c/64 scope link
       valid_lft forever preferred_lft forever

The VIP is bound to master1; access the API through the VIP

[root@k8s-master1 ~]# curl -L --cacert /opt/kubernetes/ssl/ca.pem https://192.168.0.130:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.0.123:6443"
    }
  ]
}

Stop the apiserver on master1, then access the VIP again

[root@k8s-master1 ~]# curl -L --cacert /opt/kubernetes/ssl/ca.pem https://192.168.0.130:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.0.124:6443"
    }
  ]
}

Check the IP addresses again; the VIP is now bound to master2

[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8a:2b:5f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.123/24 brd 192.168.0.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8a:2b5f/64 scope link
       valid_lft forever preferred_lft forever

[root@k8s-master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:77:dc:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.124/24 brd 192.168.0.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet 192.168.0.130/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe77:dc9c/64 scope link
       valid_lft forever preferred_lft forever

Configuring the kubectl command-line tool

Create the admin certificate signing request

[root@k8s-master1 ssl]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

Generate the admin certificate and key

[root@k8s-master1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

[root@k8s-master1 ssl]# cp admin*.pem /opt/kubernetes/ssl/

Set cluster parameters

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.0.130:6443

Set client authentication parameters

kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem

Set context parameters

kubectl config set-context kubernetes --cluster=kubernetes --user=admin

Set the default context

kubectl config use-context kubernetes
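The four commands above write ~/.kube/config (viewable with `kubectl config view`). Roughly, the resulting file is shaped like the sketch below; the `<base64 ...>` placeholders stand for the embedded certificate blobs, which are elided here:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 ca.pem>   # embedded by --embed-certs=true
    server: https://192.168.0.130:6443            # the keepalived VIP
users:
- name: admin
  user:
    client-certificate-data: <base64 admin.pem>
    client-key-data: <base64 admin-key.pem>
contexts:
- name: kubernetes
  context:
    cluster: kubernetes
    user: admin
current-context: kubernetes
```

Pointing `server` at the VIP rather than a single master means kubectl keeps working after a failover.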

Check cluster status

[root@k8s-master1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"} 

Original post: https://www.cnblogs.com/yuezhimi/p/10132549.html

Date: 2024-10-08 04:20:36
