Deploying a Single-Master Kubernetes Cluster with kubeadm

This article follows the Kubernetes docs to create a single-master Kubernetes cluster with kubeadm.

VM: CentOS 7.5
Kubernetes yum repo: Aliyun mirror (inside China)
Version: v1.14.1 (released 2019-04-09)
Pod network: Calico

1 Configure the package mirror

Using yum as the example; on Ubuntu you can use the USTC mirror instead.

Official Google repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Aliyun mirror (China):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2 Initialize the host

# 2.1 Set SELinux to permissive (disabling it outright also works)
## I usually stop firewalld here as well
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# 2.2 Disable swap; otherwise you must set the kubelet flag --fail-swap-on=false (default true)
swapoff -a
## Check swap usage
free -m
## Edit /etc/fstab to drop the swap entry so swap stays off after reboot
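## A one-liner sketch for that edit (assumes GNU sed and whitespace-delimited fstab entries; writes a .bak backup):
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab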

# 2.3 Configure networking
## 2.3.1 Load the kernel module (you can skip straight to 2.3.3)
modprobe br_netfilter
lsmod | grep br_netfilter
## 2.3.2 Configure network-related sysctl parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

## 2.3.3 Load the modules automatically at boot
Write the module names into a file under /etc/modules-load.d (systemd only reads files ending in .conf there); mind the file permissions.

## 2.3.4 Install IPVS so kube-proxy can use it instead of iptables
yum -y install ipvsadm
cat > /etc/modules-load.d/ipvs.conf << EOF
br_netfilter
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs
EOF
## Load the modules now without rebooting
for m in $(cat /etc/modules-load.d/ipvs.conf); do modprobe $m; done
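## Verify the modules are loaded (optional check):
lsmod | grep -E 'br_netfilter|ip_vs'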

3 Install the Kubernetes packages

# List the kubeadm versions available from yum and pick the one to install
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Or install the latest version directly
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# With Docker, the kubelet auto-detects Docker's cgroup driver (usually cgroupfs); install Docker as follows
docker version
yum install -y docker-ce --disableexcludes=docker-ce
> Aliyun also mirrors the docker-ce repo; configure it with:
> curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# If you use another CRI, edit the kubelet args file to set a matching cgroup driver
> KUBELET_EXTRA_ARGS=--cgroup-driver=whatyouwant

# Enable kubelet at boot (--now also starts it immediately)
systemctl enable --now kubelet
> At this point systemctl status kubelet shows the service failing: /var/lib/kubelet/config.yaml does not exist yet
> This can be ignored for now; the kubeadm init step below creates that file
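Optionally, the control-plane images can be pre-pulled before running kubeadm init (the init output below also mentions this; flags as in kubeadm v1.14):

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.14.1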

4 Master node

kubeadm init --pod-network-cidr=192.168.0.0/16 --image-repository registry.aliyuncs.com/google_containers
> --pod-network-cidr must not overlap with the networks already on the host's interfaces (192.168.0.0/16 is Calico's default pool)
> --kubernetes-version v1.14.1 can pin the version; it must be compatible with the kubelet installed above (the kubelet may not be newer than the API server)
> --apiserver-advertise-address= sets the IP the API server advertises and listens on
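To check for overlaps before picking the CIDR, a quick look at the host's addresses and routes is enough:

ip -4 addr
ip route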

From the command's output you can see the key steps: generating the certificates, writing /var/lib/kubelet/config.yaml and the other config files, and so on.

It also:

  • 1 prompts you to deploy a pod network
  • 2 shows how to configure kubectl's config file
  • 3 prints the command and parameters for joining additional nodes

The output:

I0501 19:53:16.073098   11685 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0501 19:53:16.073210   11685 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [c75a.shared localhost] and IPs [10.211.55.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [c75a.shared localhost] and IPs [10.211.55.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c75a.shared kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.7]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.003746 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node c75a.shared as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node c75a.shared as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uurhat.duj8060jmku42htb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.7:6443 --token uurhat.duj8060jmku42htb     --discovery-token-ca-cert-hash sha256:2a3487a02927c7c496a7516af076ac3ad16e6b3721ee6c6a025bb87beace89e2

After this step, kubectl get nodes lists the cluster's nodes, and kubectl describe node shows each node's labels; you can see the master carries the node-role.kubernetes.io/master label.
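For example (the grep filter is just illustrative):

kubectl get nodes
kubectl get nodes --show-labels | grep node-role.kubernetes.io/master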

5 Pod network

Using Calico as the example. Calico itself can also be installed standalone on hosts, outside a Kubernetes cluster.
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

If kubeadm init was given a --pod-network-cidr other than 192.168.0.0/16, download calico.yaml first and fix the pool CIDR, as sketched below.
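A minimal sketch (assumes the Calico v3.3 manifest, whose default pool is 192.168.0.0/16; 10.244.0.0/16 here is just an example CIDR):

curl -O https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# point CALICO_IPV4POOL_CIDR at the CIDR you gave kubeadm init
sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico.yaml
kubectl apply -f calico.yaml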

Check the Calico pods: kubectl get pods --all-namespaces

6 Add worker nodes

As root, run the join command printed by kubeadm init above, i.e.:
kubeadm join apiserver_ip:apiserver_port --token <token> --discovery-token-ca-cert-hash sha256:<hash>

  1. If apiserver_ip is an IPv6 address, format address and port like this: [fd00::101]:6443
  2. If the token has expired or been lost, view or create one with kubeadm token list or kubeadm token create
  3. If you have lost the certificate hash, recompute it with (see also the one-liner after this list):
 openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
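Alternatively, one command regenerates a complete join command, token included:

kubeadm token create --print-join-command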

A full join command looks like this:

kubeadm join 10.211.55.7:6443 --token uurhat.duj8060jmku42htb     --discovery-token-ca-cert-hash sha256:2a3487a02927c7c496a7516af076ac3ad16e6b3721ee6c6a025bb87beace89e2

The output:

root@node ~# kubeadm join 10.211.55.7:6443 --token uurhat.duj8060jmku42htb \
    --discovery-token-ca-cert-hash sha256:2a3487a02927c7c496a7516af076ac3ad16e6b3721ee6c6a025bb87beace89e2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Then run kubectl get nodes on the master; the new node takes about a minute to register with the cluster, depending on network and host performance.

7 Remove a node

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

To allow the node to be re-added later, also run kubeadm reset on it.
The network rules must be cleaned up separately; before flushing, check whether any non-k8s rules exist (see the read-only checks after the flush commands).
iptables

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

ipvs

ipvsadm -C
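The read-only checks mentioned above (run these before flushing):

iptables-save | grep -i kube | head
ipvsadm -Ln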

8 Deploy the Dashboard add-on

8.1 Apply the manifest

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

8.2 This pulls the container image k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1, which requires a connection that can get past the GFW

Using the method from reference 3, the image is already mirrored domestically and can be used directly:

docker pull registry.cn-hangzhou.aliyuncs.com/xw9/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/xw9/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

8.3 Configure access from outside the cluster

kubectl get svc -n kube-system now shows the kubernetes-dashboard service.

curl -vk https://10.110.167.233 confirms the service responds (10.110.167.233 is this cluster's assigned ClusterIP; yours will differ)

For access from outside the cluster, change the service type to NodePort; Ingress and similar approaches also work.

kubectl edit svc kubernetes-dashboard -n kube-system
# change spec.type from ClusterIP to NodePort
kubectl get svc -n kube-system # check the assigned node port, then access it over https
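Equivalently, as a non-interactive one-liner (a sketch using a strategic merge patch):

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'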

Opening the UI, you are prompted to upload a kubeconfig file or enter a token.

8.4 Create an admin user (alternatively, bind the existing kubernetes-dashboard ServiceAccount to the cluster-admin role)

cat > admin-user.yaml << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl apply -f admin-user.yaml

# Fetch the login token
kubectl describe secrets `kubectl get sa admin-user -o 'jsonpath={.secrets[0].name}' -n kube-system` -n kube-system | awk '$1=="token:"{print $2}'

9 Run Nginx

kubectl run nginx --image=nginx:1.16.0
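In v1.14, kubectl run creates a Deployment, so it can be exposed and checked like this (a sketch; 80 is nginx's default port):

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx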

10 Notes

IPVS mode has been GA for a while, but after this install kube-proxy still uses iptables; it must be enabled explicitly, e.g.:
kubeadm init --feature-gates=SupportIPVSProxyMode=true
To be tested next time.
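An alternative that avoids the feature gate (a sketch against the kubeadm v1beta1 config API shipped with v1.14; untested here, as noted above): set the kube-proxy mode in a kubeadm config file and pass it to init.

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 192.168.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
kubeadm init --config kubeadm-config.yaml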

Ref: references and acknowledgements

1 Kubernetes documentation
2 zzphper's blog: quickly deploying Kubernetes with kubeadm
3 Pushing k8s images to registries inside China
4 Dashboard access
5 Kubernetes Dashboard user

Original work by xiaowei, published under the CC BY-NC-SA 4.0 license, 2019-05-01

Happy May Day!

Original post: https://www.cnblogs.com/i2u9/p/kubernetes-kubeadmin.html
