Adding and Removing Nodes in an RKE K8S Cluster

Removing a node with RKE:

Edit cluster.yml and delete the configuration block for the node you want to remove, then run the command below.

# more cluster.yml
nodes:
  - address: 172.20.101.103
    user: ptmind
    role: [controlplane,worker,etcd]
  - address: 172.20.101.104
    user: ptmind
    role: [controlplane,worker,etcd]
  - address: 172.20.101.105
    user: ptmind
    role: [controlplane,worker,etcd]

# removed from cluster.yml:
#  - address: 172.20.101.106
#    user: ptmind
#    role: [worker]
#    labels: {traefik: traefik-outer}
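Before applying the change, you can optionally drain the node so its pods are evicted gracefully (RKE only cordons the host during the run, as the log below shows). A minimal sketch, assuming the RKE-generated kubeconfig kube_config_cluster.yml sits in the working directory and that the node is registered under its IP address (it may be registered under its hostname instead, depending on hostname_override):

kubectl --kubeconfig kube_config_cluster.yml drain 172.20.101.106 --ignore-daemonsets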

Run the node removal (the --update-only flag restricts rke up to worker-node changes, skipping redeployment of the etcd and control planes, which suits removing a worker-only node like this one):

rke up --update-only

The output shows the node being cordoned, removed, and cleaned up:

INFO[0010] [reconcile] Check etcd hosts to be deleted
INFO[0010] [reconcile] Check etcd hosts to be added
INFO[0010] [hosts] Cordoning host [172.20.101.106]
INFO[0010] [hosts] Deleting host [172.20.101.106] from the cluster
INFO[0010] [hosts] Successfully deleted host [172.20.101.106] from the cluster
INFO[0010] [dialer] Setup tunnel for host [172.20.101.106]
INFO[0011] [worker] Tearing down Worker Plane..
INFO[0011] [remove/kubelet] Successfully removed container on host [172.20.101.106]
INFO[0011] [remove/kube-proxy] Successfully removed container on host [172.20.101.106]
INFO[0011] [remove/nginx-proxy] Successfully removed container on host [172.20.101.106]
INFO[0011] [remove/service-sidekick] Successfully removed container on host [172.20.101.106]
INFO[0011] [worker] Successfully tore down Worker Plane..
INFO[0011] [hosts] Cleaning up host [172.20.101.106]
INFO[0011] [hosts] Running cleaner container on host [172.20.101.106]
INFO[0012] [kube-cleaner] Successfully started [kube-cleaner] container on host [172.20.101.106]
INFO[0012] Waiting for [kube-cleaner] container to exit on host [172.20.101.106]
INFO[0012] Container [kube-cleaner] is still running on host [172.20.101.106]
INFO[0013] Waiting for [kube-cleaner] container to exit on host [172.20.101.106]
INFO[0013] [hosts] Removing cleaner container on host [172.20.101.106]
INFO[0013] [hosts] Removing dead container logs on host [172.20.101.106]
INFO[0014] [cleanup] Successfully started [rke-log-cleaner] container on host [172.20.101.106]
INFO[0014] [remove/rke-log-cleaner] Successfully removed container on host [172.20.101.106]
INFO[0014] [hosts] Successfully cleaned up host [172.20.101.106]
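After the run finishes, a quick check that the node is really gone (same kubeconfig assumption as above):

kubectl --kubeconfig kube_config_cluster.yml get nodes

172.20.101.106 should no longer appear in the output.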

Adding a node:

Edit cluster.yml and add the configuration for the new node, then run the command below.

# more cluster.yml
nodes:
  - address: 172.20.101.103
    user: ptmind
    role: [controlplane,worker,etcd]
  - address: 172.20.101.104
    user: ptmind
    role: [controlplane,worker,etcd]
  - address: 172.20.101.105
    user: ptmind
    role: [controlplane,worker,etcd]

  - address: 172.20.101.106
    user: ptmind
    role: [worker]
    labels: {traefik: traefik-outer}

Run the node addition (again restricted to worker-plane changes by --update-only):

rke up --update-only

The output shows the worker components being started on the new node:

INFO[0025] [worker] Building up Worker Plane..
INFO[0025] [worker] Successfully started [nginx-proxy] container on host [172.20.101.106]
INFO[0026] [worker] Successfully started [rke-log-linker] container on host [172.20.101.106]
INFO[0026] [remove/rke-log-linker] Successfully removed container on host [172.20.101.106]
INFO[0027] [worker] Successfully started [kubelet] container on host [172.20.101.106]
INFO[0027] [healthcheck] Start Healthcheck on service [kubelet] on host [172.20.101.106]
INFO[0032] [healthcheck] service [kubelet] on host [172.20.101.106] is healthy
INFO[0032] [worker] Successfully started [rke-log-linker] container on host [172.20.101.106]
INFO[0033] [remove/rke-log-linker] Successfully removed container on host [172.20.101.106]
INFO[0033] [worker] Successfully started [kube-proxy] container on host [172.20.101.106]
INFO[0033] [healthcheck] Start Healthcheck on service [kube-proxy] on host [172.20.101.106]
INFO[0038] [healthcheck] service [kube-proxy] on host [172.20.101.106] is healthy
INFO[0039] [worker] Successfully started [rke-log-linker] container on host [172.20.101.106]
INFO[0039] [remove/rke-log-linker] Successfully removed container on host [172.20.101.106]
INFO[0039] [worker] Successfully started Worker Plane.. 
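After the run, confirm that the new worker has registered and carries the label defined in cluster.yml (--show-labels is a standard kubectl flag; same kubeconfig assumption as above):

kubectl --kubeconfig kube_config_cluster.yml get nodes --show-labels

The new node should report Ready and list traefik=traefik-outer among its labels.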
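The traefik: traefik-outer label is presumably there so that workloads can be pinned to this node. A minimal sketch of a Deployment targeting it via nodeSelector (the Deployment name and image tag are hypothetical, not from the original article):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-outer            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik-outer
  template:
    metadata:
      labels:
        app: traefik-outer
    spec:
      nodeSelector:
        traefik: traefik-outer   # only nodes carrying this label are eligible
      containers:
        - name: traefik
          image: traefik:v1.7    # hypothetical image tag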

Original article: https://blog.51cto.com/michaelkang/2435823
