Installing a Kubernetes Cluster with kubeadm

I. Environment Preparation

1. Install and configure Docker

For Kubernetes v1.11.0, Docker v17.03 is the recommended version; Docker v1.11, v1.12, and v1.13 can also be used. Newer Docker releases may not work correctly.

# Remove any previously installed Docker and install the specified version
[root@docker-5 ~]# yum remove -y docker-ce docker-ce-selinux container-selinux
[root@docker-5 ~]# rm -rf /var/lib/docker
[root@docker-5 ~]# yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos
[root@docker-5 ~]# systemctl enable docker && systemctl restart docker
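
An optional sanity check (not part of the original steps) to confirm the pinned Docker version is the one actually running; it should print something like 17.03.1-ce:

[root@docker-5 ~]# docker version --format '{{.Server.Version}}'
17.03.1-ce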

2. Configure the Aliyun yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
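
Optionally, refresh the yum cache and confirm the repository actually serves the Kubernetes packages before installing (a suggested check, not in the original):

[root@docker-5 ~]# yum makecache fast
[root@docker-5 ~]# yum list kubelet --showduplicates | tail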

3. Install kubeadm, kubelet, and kubectl

[root@docker-5 ~]# yum install -y kubelet kubeadm kubectl
[root@docker-5 ~]# systemctl enable kubelet && systemctl start kubelet
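
Note: an unpinned install pulls the newest packages, which is why the nodes later report kubelet v1.11.1 even though the cluster is initialized as v1.11.0. To keep everything at exactly v1.11.0 instead, pin the versions (assuming those package versions are still available in the repo):

[root@docker-5 ~]# yum install -y kubelet-1.11.0 kubeadm-1.11.0 kubectl-1.11.0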

4. Configure system parameters

# Disable SELinux
[root@docker-5 ~]# setenforce 0
# Disable swap
[root@docker-5 ~]# swapoff -a
[root@docker-5 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
# Stop the firewall
[root@docker-5 ~]# systemctl stop firewalld
# Set kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@docker-5 ~]# sysctl --system
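
Two follow-ups worth adding here (suggestions, not in the original steps): setenforce 0 does not survive a reboot, and the bridge sysctls only take effect when the br_netfilter module is loaded:

# Keep SELinux permissive across reboots
[root@docker-5 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Load the module the bridge sysctls depend on, then verify
[root@docker-5 ~]# modprobe br_netfilter
[root@docker-5 ~]# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1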

II. Master Node Configuration

1. Because Google's registry (k8s.gcr.io) is unreachable from mainland China, use images that were pushed to a personal Aliyun registry instead

[root@docker-5 ~]# docker login --username=du11589 registry.cn-shenzhen.aliyuncs.com
Password:
Login Succeeded
[root@docker-5 ~]# ./kube.sh
[root@docker-5 ~]# cat kube.sh
#!/bin/bash
images=(kube-proxy-amd64:v1.11.0
        kube-scheduler-amd64:v1.11.0
        kube-controller-manager-amd64:v1.11.0
        kube-apiserver-amd64:v1.11.0
        etcd-amd64:3.2.18
        coredns:1.1.3
        pause-amd64:3.1
        kubernetes-dashboard-amd64:v1.8.3
        k8s-dns-sidecar-amd64:1.14.8
        k8s-dns-kube-dns-amd64:1.14.8
        k8s-dns-dnsmasq-nanny-amd64:1.14.8 )
for imageName in "${images[@]}"; do
    docker pull registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
    docker tag registry.cn-shenzhen.aliyuncs.com/duyj/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
done

docker pull quay.io/coreos/flannel:v0.10.0-amd64
# kubeadm also expects the pause image under the name k8s.gcr.io/pause:3.1;
# tag it by name rather than by a hard-coded image ID
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
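
To cross-check that the script covers every image kubeadm will expect, kubeadm itself can print the list (the subcommand exists as of v1.11; treat this as a hint rather than part of the original procedure):

[root@docker-5 ~]# kubeadm config images list --kubernetes-version=v1.11.0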

2. Check the downloaded images

[root@docker-5 ~]# docker image ls
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0             55b70b420785        4 weeks ago         155 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0             0e4a34a3b0e6        4 weeks ago         56.8 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0             1d3d7afd77d1        4 weeks ago         97.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0             214c48e87f58        4 weeks ago         187 MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        2 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        3 months ago        219 MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.3              0c60bcf89900        5 months ago        102 MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8              9d10ba894459        5 months ago        42.2 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8              ac4746d72dc4        5 months ago        40.9 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8              6ceab6c8330d        5 months ago        50.5 MB
quay.io/coreos/flannel                     v0.10.0-amd64       f0fad859c909        6 months ago        44.6 MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        7 months ago        742 kB

3. Initialize the master node

[root@docker-5 ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=20.0.30.105
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0726 17:41:23.621027   65735 kernel_validator.go:81] Validating kernel version
I0726 17:41:23.621099   65735 kernel_validator.go:96] Validating kernel config
    [WARNING Hostname]: hostname "docker-5" could not be reached
    [WARNING Hostname]: hostname "docker-5" lookup docker-5 on 8.8.8.8:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [docker-5 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 20.0.30.105]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [docker-5 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [docker-5 localhost] and IPs [20.0.30.105 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.001159 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node docker-5 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node docker-5 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker-5" as an annotation
[bootstraptoken] using token: g80a49.qghzuffg3z58ykmv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 20.0.30.105:6443 --token g80a49.qghzuffg3z58ykmv --discovery-token-ca-cert-hash sha256:8ae3e31892f930ba48eb33e96a2d86c0daf2a13847f8dc009e25e200a9cee6f6

[root@docker-5 ~]#

4. Check the initialization status

[root@docker-5 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@docker-5 ~]# kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
docker-5   NotReady   master    35m       v1.11.1
[root@docker-5 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-99kct           0/1       Pending   0          35m
kube-system   coredns-78fcdf6894-wsf4g           0/1       Pending   0          35m
kube-system   etcd-docker-5                      1/1       Running   0          34m
kube-system   kube-apiserver-docker-5            1/1       Running   0          35m
kube-system   kube-controller-manager-docker-5   1/1       Running   0          35m
kube-system   kube-proxy-ktks6                   1/1       Running   0          35m
kube-system   kube-scheduler-docker-5            1/1       Running   0          35m
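
The export above only lasts for the current shell session. To make it persistent for root (a convenience, not part of the original steps), append it to the shell profile:

[root@docker-5 ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile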

5. Deploy the pod network on the master

The CoreDNS pods shown above stay Pending, and the master reports NotReady, until a pod network add-on is installed; this guide uses flannel.

[root@docker-5 ~]# wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
[root@docker-5 ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
[root@docker-5 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-99kct           1/1       Running   0          41m
kube-system   coredns-78fcdf6894-wsf4g           1/1       Running   0          41m
kube-system   etcd-docker-5                      1/1       Running   0          40m
kube-system   kube-apiserver-docker-5            1/1       Running   0          40m
kube-system   kube-controller-manager-docker-5   1/1       Running   0          40m
kube-system   kube-flannel-ds-fmd97              1/1       Running   0          37s
kube-system   kube-proxy-ktks6                   1/1       Running   0          41m
kube-system   kube-scheduler-docker-5            1/1       Running   0          40m
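
With flannel in place, the master should move from NotReady to Ready (compare step 4); the output of kubectl get nodes will now look roughly like this:

[root@docker-5 ~]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
docker-5   Ready     master    42m       v1.11.1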

III. Adding Worker Nodes

1. Before a node joins the cluster, it must complete the environment preparation from Part I

2. Download the images

[root@docker-2 ~]# docker login --username=du11589 registry.cn-shenzhen.aliyuncs.com
Password:
Login Succeeded
[root@docker-2 ~]# ./nodekube.sh
[root@docker-2 ~]# cat nodekube.sh
#!/bin/bash
images=(kube-proxy-amd64:v1.11.0
        pause-amd64:3.1
        kubernetes-dashboard-amd64:v1.8.3
        heapster-influxdb-amd64:v1.3.3
        heapster-grafana-amd64:v4.4.3
        heapster-amd64:v1.4.2 )
for imageName in "${images[@]}"; do
    docker pull registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
    docker tag registry.cn-shenzhen.aliyuncs.com/duyj/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
done

docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1

3. Check the downloaded images

[root@docker-2 ~]# docker image ls
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy-amd64             v1.11.0             1d3d7afd77d1        4 weeks ago         97.8 MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.8.3              0c60bcf89900        5 months ago        102 MB
quay.io/coreos/flannel                  v0.10.0-amd64       f0fad859c909        6 months ago        44.6 MB
k8s.gcr.io/pause-amd64                  3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/pause                        3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/heapster-influxdb-amd64      v1.3.3              577260d221db        10 months ago       12.5 MB
k8s.gcr.io/heapster-grafana-amd64       v4.4.3              8cb3de219af7        10 months ago       152 MB
k8s.gcr.io/heapster-amd64               v1.4.2              d4e02f5922ca        11 months ago       73.4 MB

4. Join the node to the cluster

[root@docker-2 ~]# kubeadm join 20.0.30.105:6443 --token g80a49.qghzuffg3z58ykmv --discovery-token-ca-cert-hash sha256:8ae3e31892f930ba48eb33e96a2d86c0daf2a13847f8dc009e25e200a9cee6f6
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0726 19:17:28.277627   36641 kernel_validator.go:81] Validating kernel version
I0726 19:17:28.277705   36641 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "20.0.30.105:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://20.0.30.105:6443"
[discovery] Requesting info from "https://20.0.30.105:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "20.0.30.105:6443"
[discovery] Successfully established connection with API Server "20.0.30.105:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
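
One caveat: the bootstrap token printed by kubeadm init expires after 24 hours by default. To add more nodes later, generate a fresh join command on the master:

[root@docker-5 ~]# kubeadm token create --print-join-command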

5. On the master, verify the node has joined

[root@docker-5 ~]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
docker-2   Ready     <none>    3m        v1.11.1
docker-5   Ready     master    4h        v1.11.1
[root@docker-5 ~]# kubectl get pods -n kube-system -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
coredns-78fcdf6894-99kct           1/1       Running   0          4h        10.244.0.2    docker-5
coredns-78fcdf6894-wsf4g           1/1       Running   0          4h        10.244.0.3    docker-5
etcd-docker-5                      1/1       Running   0          4h        20.0.30.105   docker-5
kube-apiserver-docker-5            1/1       Running   0          4h        20.0.30.105   docker-5
kube-controller-manager-docker-5   1/1       Running   0          4h        20.0.30.105   docker-5
kube-flannel-ds-c7rb4              1/1       Running   0          7m        20.0.30.102   docker-2
kube-flannel-ds-fmd97              1/1       Running   0          3h        20.0.30.105   docker-5
kube-proxy-7tmtg                   1/1       Running   0          7m        20.0.30.102   docker-2
kube-proxy-ktks6                   1/1       Running   0          4h        20.0.30.105   docker-5
kube-scheduler-docker-5            1/1       Running   0          4h        20.0.30.105   docker-5
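
As a final smoke test (not part of the original walkthrough), schedule a small workload and check that pods are placed on the new node:

[root@docker-5 ~]# kubectl run nginx --image=nginx --replicas=2
[root@docker-5 ~]# kubectl get pods -o wide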

Original article: http://blog.51cto.com/lullaby/2150610
