Deploying a Kubernetes Cluster on CentOS 7

Introduction

What is Kubernetes?

Kubernetes is an open-source platform for automating the deployment, scaling, and operation of container clusters.

With Kubernetes, you can respond to user demand quickly and effectively:

a. Deploy your applications quickly and predictably

b. Scale your applications on the fly

c. Roll out new features seamlessly

d. Save resources by optimizing hardware usage

The broader goal is to foster an ecosystem of components and tools that ease the burden of running applications in public and private clouds.

Key features of Kubernetes:

a. Portable: public cloud, private cloud, hybrid cloud, multi-cloud

b. Extensible: modular, pluggable, hookable, composable

c. Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling

Kubernetes began as a Google project in 2014. It builds on Google's decade-plus of experience running production workloads at scale, combined with the best ideas and practices from the community.

Kubernetes architecture:

High-resolution diagram: https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.2/docs/design/architecture.png

Kubernetes is made up of the following core components:

a. etcd stores the state of the entire cluster;

b. apiserver is the single entry point for resource operations and provides authentication, authorization, access control, and API registration and discovery;

c. controller manager maintains cluster state, handling fault detection, auto-scaling, rolling updates, and the like;

d. scheduler handles resource scheduling, placing Pods onto suitable machines according to the configured scheduling policies;

e. kubelet maintains the container lifecycle and also manages volumes (CVI) and networking (CNI);

f. the container runtime manages images and actually runs Pods and containers (CRI);

g. kube-proxy provides in-cluster service discovery and load balancing for Services.

Beyond the core components, several add-ons are recommended:

h. kube-dns provides DNS for the whole cluster

i. Ingress Controller provides an external entry point for Services

j. Heapster provides resource monitoring

k. Dashboard provides a GUI

l. Federation provides clusters spanning availability zones

m. Fluentd-elasticsearch provides cluster log collection, storage, and querying

****** For details, see: https://www.kubernetes.org.cn/docs

I. Environment

The Kubernetes package provides several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, and their configuration lives in one place: /etc/kubernetes. We will spread these services across different hosts. The first host, k8s-master, will be the cluster's master and will run kube-apiserver, kube-controller-manager, and kube-scheduler; in addition, etcd will also run on the master. The remaining hosts, k8s-node1 and k8s-node2, will be worker nodes running kubelet, kube-proxy, and docker.

Operating system: CentOS 7, 64-bit

Open vSwitch version: 2.5.0

Kubernetes version: v1.5.2

Etcd version: 3.1.9

Docker version: 1.12.6

Servers:

192.168.80.130  k8s-master

192.168.80.131  k8s-node1

192.168.80.132  k8s-node2

II. Pre-deployment preparation

1. Set up passwordless SSH login

[Master]

[root@k8s-master ~]# ssh-keygen

[root@k8s-master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node1

[root@k8s-master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node2
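****** As a quick sanity check, each of the following should print the node's hostname without prompting for a password:

[root@k8s-master ~]# ssh k8s-node1 hostname

[root@k8s-master ~]# ssh k8s-node2 hostname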

2. On all machines

a. Add hosts entries

[root@k8s-master ~]# vim /etc/hosts

192.168.80.130  k8s-master

192.168.80.131  k8s-node1

192.168.80.132  k8s-node2

b. Synchronize time

[root@k8s-master ~]# yum -y install lrzsz git wget python-devel ntp net-tools curl cmake epel-release rpmdevtools openssl-devel kernel-devel gcc redhat-rpm-config bridge-utils

[root@k8s-master ~]# yum groupinstall "Development Tools" -y

[root@k8s-master ~]# cp -Rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

[root@k8s-master ~]# ntpdate 133.100.11.8

[root@k8s-master ~]# sed -i 's#ZONE="America/New_York"#ZONE="Asia/Shanghai"#g' /etc/sysconfig/clock

[root@k8s-master ~]# hwclock -w

[root@k8s-master ~]# date -R

3. Install Open vSwitch on the two node machines (node1 shown as the example)

a. Install Open vSwitch

[root@k8s-node1 ~]# yum -y install lrzsz git wget python-devel ntp net-tools curl cmake epel-release rpmdevtools openssl-devel kernel-devel gcc redhat-rpm-config bridge-utils

[root@k8s-node1 ~]# yum groupinstall "Development Tools" -y

[root@k8s-node1 ~]# mkdir -p ~/rpmbuild/SOURCES

[root@k8s-node1 ~]# wget http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz

[root@k8s-node1 ~]# cp openvswitch-2.5.0.tar.gz ~/rpmbuild/SOURCES/

[root@k8s-node1 ~]# tar xfz openvswitch-2.5.0.tar.gz

[root@k8s-node1 ~]# sed 's/openvswitch-kmod, //g' openvswitch-2.5.0/rhel/openvswitch.spec > openvswitch-2.5.0/rhel/openvswitch_no_kmod.spec

[root@k8s-node1 ~]# rpmbuild -bb --nocheck ~/openvswitch-2.5.0/rhel/openvswitch_no_kmod.spec

[root@k8s-node1 ~]# yum -y localinstall ~/rpmbuild/RPMS/x86_64/openvswitch-2.5.0-1.x86_64.rpm

[root@k8s-node1 ~]# modprobe openvswitch && systemctl start openvswitch.service
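****** Optional check: with the module loaded and the daemon running, ovs-vsctl should respond and report ovs_version 2.5.0 at the end of its output:

[root@k8s-node1 ~]# systemctl status openvswitch.service

[root@k8s-node1 ~]# ovs-vsctl show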

b. Configure the GRE tunnel

[Node1]

[root@k8s-node1 ~]# ovs-vsctl add-br obr0

****** Next, create the GRE port and add the new gre0 to obr0. Run the following on node1:

[root@k8s-node1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.80.132

****** Note: remote_ip is node2's IP

[Node2]

[root@k8s-node2 ~]# ovs-vsctl add-br obr0

****** Next, create the GRE port and add the new gre0 to obr0. Run the following on node2:

[root@k8s-node2 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.80.131

****** Note: remote_ip is node1's IP

****** The tunnel between node1 and node2 is now established. Next, on node1 and node2, create a bridge br0 to replace Docker's default docker0. Set br0's address to 172.16.1.1/24 on node1 and 172.16.2.1/24 on node2, and add obr0 as a port of br0. Run all of the following commands on both node1 and node2.

Here node1 is used as the example:

[root@k8s-node1 ~]# brctl addbr br0               // create the Linux bridge

[root@k8s-node1 ~]# brctl addif br0 obr0          // add obr0 as a port of br0

[root@k8s-node1 ~]# ip link set dev docker0 down  // bring docker0 down

[root@k8s-node1 ~]# ip link del dev docker0       // delete docker0

****** So that br0 survives a reboot, create the interface file ifcfg-br0 under /etc/sysconfig/network-scripts:

[root@k8s-node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0

ONBOOT=yes

BOOTPROTO=static

IPADDR=172.16.1.1

NETMASK=255.255.255.0

GATEWAY=172.16.1.0

USERCTL=no

TYPE=Bridge

IPV6INIT=no

******** Run the same steps on node2 ********
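****** For reference, node2's ifcfg-br0 is identical apart from the addresses, following the 172.16.2.1/24 assignment above:

DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.2.1
NETMASK=255.255.255.0
GATEWAY=172.16.2.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no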

c. Add routes between the two nodes:

    [node1]

[root@k8s-node1 ~]# cd /etc/sysconfig/network-scripts/

[root@k8s-node1 ~]# ls ./

ifcfg-br0   ifcfg-ens33   ifcfg-lo

[root@k8s-node1 ~]# vim route-ens33

172.16.2.0/24 via 192.168.80.132 dev ens33

****** Note: ens33 is node1's physical NIC name; if yours is eth0, the file name is route-eth0

[root@k8s-node1 ~]# service network restart

[node2]

[root@k8s-node2 ~]# cd /etc/sysconfig/network-scripts/

[root@k8s-node2 ~]# vim route-ens33

172.16.1.0/24 via 192.168.80.131 dev ens33

[root@k8s-node2 ~]# service network restart
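****** Optional check: after the network restart, the cross-node route should be present in the routing table; on node1, for example:

[root@k8s-node1 ~]# ip route show | grep 172.16.2.0

172.16.2.0/24 via 192.168.80.132 dev ens33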

d. Test GRE tunnel connectivity

[root@k8s-node1 ~]# ping -w 4 172.16.2.1

PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.

64 bytes from 172.16.2.1: icmp_seq=1 ttl=64 time=0.652 ms

64 bytes from 172.16.2.1: icmp_seq=2 ttl=64 time=0.281 ms

64 bytes from 172.16.2.1: icmp_seq=3 ttl=64 time=0.374 ms

64 bytes from 172.16.2.1: icmp_seq=4 ttl=64 time=0.187 ms

--- 172.16.2.1 ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3002ms

rtt min/avg/max/mdev = 0.187/0.373/0.652/0.174 ms

III. Deploying Kubernetes

1. Install on the master machine

    [master]

[root@k8s-master ~]# yum -y install etcd kubernetes

2. Configure etcd

etcd's legacy client port is 4001; the settings below have it listen on both 2379 and 4001:

a. Configure etcd.conf

[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf_bak

[root@k8s-master ~]# vim /etc/etcd/etcd.conf

# [member]

ETCD_NAME="etcd-master"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://k8s-master:2380"

ETCD_INITIAL_CLUSTER="etcd-master=http://k8s-master:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="http://k8s-master:2379,http://k8s-master:4001"

b. Configure etcd.service

[root@k8s-master ~]# cp /usr/lib/systemd/system/etcd.service /usr/lib/systemd/system/etcd.service_bak

[root@k8s-master ~]# vim /usr/lib/systemd/system/etcd.service

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""

****** Note: only modify ExecStart in the [Service] section

[root@k8s-master ~]# mkdir -p /export/etcd

[root@k8s-master ~]# chown -R etcd:etcd /export/etcd

c. Start the etcd service

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl enable etcd.service

[root@k8s-master ~]# systemctl start etcd.service

d. Verify it succeeded

[root@k8s-master ~]# etcdctl member list

ffe21a7812eb7c5f: name=etcd-master peerURLs=http://k8s-master:2380 clientURLs=http://k8s-master:2379,http://k8s-master:4001 isLeader=true
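****** Optionally, check cluster health and do a quick read/write smoke test (the /test key is just a throwaway example); cluster-health should report the member as healthy, and get should echo the value back:

[root@k8s-master ~]# etcdctl cluster-health

[root@k8s-master ~]# etcdctl set /test ok

[root@k8s-master ~]# etcdctl get /test

[root@k8s-master ~]# etcdctl rm /test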

3. Configure Kubernetes

a. Configure the apiserver

[root@k8s-master ~]# cd /etc/kubernetes/

[root@k8s-master ~]# cp apiserver apiserver_bak

[root@k8s-master ~]# vim /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

# The address on the local server to listen to.

#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.

# KUBE_API_PORT="--port=8080"

KUBE_API_PORT="--port=8080"

# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster

#KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:2379"

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!

KUBE_API_ARGS=""

b. Configure config

[root@k8s-master ~]# cp config config_bak

[root@k8s-master ~]# vim /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"

#******** add etcd server info ********#

# Etcd server configuration

KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:4001"

4. Start the services

[root@k8s-master ~]#

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

systemctl restart $SERVICES

systemctl enable $SERVICES

systemctl status $SERVICES

done
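****** Once the loop finishes, two quick checks confirm the master is up (the insecure port 8080 configured earlier means no credentials are needed):

[root@k8s-master ~]# kubectl get componentstatuses

[root@k8s-master ~]# curl http://k8s-master:8080/version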

5. The node machines only need the Kubernetes package

[all node machines]

****** Here node1 is used as the example:

[root@k8s-node1 ~]# yum -y install kubernetes

****** Installing kubernetes automatically installs docker as a dependency

6. Configure Kubernetes on the node machines

[root@k8s-node1 ~]# cd /etc/kubernetes

[root@k8s-node1 ~]# cp kubelet kubelet_bak

[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

#KUBELET_ADDRESS="--address=127.0.0.1"

KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname

#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

KUBELET_HOSTNAME="--hostname-override=k8s-node1"

# location of the api-server

#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!

KUBELET_ARGS=""
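****** Note: hostname-override must be unique per node; on node2 set it to k8s-node2, for example with a one-liner instead of editing the file by hand:

[root@k8s-node2 ~]# sed -i 's#hostname-override=k8s-node1#hostname-override=k8s-node2#' /etc/kubernetes/kubelet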

7. Start the Kubernetes services on the nodes

[root@k8s-node1 ~]#

for SERVICES in kube-proxy kubelet docker; do

systemctl restart $SERVICES

systemctl enable $SERVICES

systemctl status $SERVICES

done

8. Test

  [master]

a. List the nodes:

[root@k8s-master ~]# kubectl get nodes

NAME        STATUS    AGE

k8s-node1   Ready     1h

k8s-node2   Ready     1h

b. Create an nginx Pod:

[root@k8s-master ~]# mkdir /export/kube_containers

[root@k8s-master ~]# cd /export/kube_containers

[root@k8s-master ~]# vim nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - resources:
      limits:
        cpu: 1
    image: nginx
    name: nginx
    ports:
    - containerPort: 80
      name: nginx
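****** Create it with kubectl create -f nginx.yaml. Note that the pod listing in step e below shows two nginx-controller-* replicas, which would come from a ReplicationController rather than this bare Pod; a minimal sketch of such a manifest, with the name, selector, and replica count inferred from that output:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80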

c. Create the MySQL Pod resource file

[root@k8s-master ~]# vim mysql.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 0.5
    image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: rootpwd
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    # name must match the volume name below
    - name: mysql-persistent-storage
      # mount path within the container
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistent-storage
    cinder:
      volumeID: bd82f7e2-wece-4c01-a505-4acf60b07f4a
      fsType: ext4
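****** The cinder volume assumes an OpenStack Cinder backend, and volumeID must be an existing Cinder volume. Without OpenStack, a hostPath volume can stand in for testing; a sketch, with /export/mysql-data as an assumed directory on the node:

  volumes:
  - name: mysql-persistent-storage
    hostPath:
      # assumed path on the node; create it first
      path: /export/mysql-data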

d. Create the resources

[root@k8s-master ~]# kubectl create -f mysql.yaml

e. Check resource status

[root@k8s-master ~]# kubectl get po -o wide

NAME                     READY     STATUS              RESTARTS   AGE       IP        NODE

mysql                    0/1       ContainerCreating   0          5m        <none>    k8s-node2

nginx-controller-fnttl   0/1       ContainerCreating   0          5m        <none>    k8s-node2

nginx-controller-kb4hj   0/1       ContainerCreating   0          5m        <none>    k8s-node1

****** The STATUS here is ContainerCreating because the nodes are still pulling images at this point; wait a moment and it will change. If you want to be sure something is happening, monitor the network traffic with nmon.
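****** You can also watch progress from the master by describing the pod; the Events section at the bottom shows image pulls and any errors:

[root@k8s-master ~]# kubectl describe po mysql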

****** Check again:

[root@k8s-master ~]# kubectl get po -o wide

NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE

mysql                    1/1       Running   0          19m       172.17.0.3   k8s-node2

nginx-controller-fnttl   1/1       Running   0          19m       172.17.0.2   k8s-node2

nginx-controller-kb4hj   1/1       Running   0          19m       172.17.0.2   k8s-node1

****** The pods have now been deployed and are running, so STATUS is Running and READY has gone from 0/1 to 1/1.

9. View logs

Using the master's logs as an example:

[root@k8s-master ~]# tail -f /var/log/messages | grep kube

Dec 11 09:54:11 192 kube-scheduler: I1211 09:54:11.380994   20445 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mysql", UID:"2f192467-a030-11e5-8a55-000c298cfaa1", APIVersion:"v1", ResourceVersion:"3522", FieldPath:""}): reason: 'scheduled' Successfully assigned mysql to dslave

IV. Common errors and solutions

1. [Error 1]

[root@k8s-master ~]# kubectl create -f mysql.yaml

Error from server (ServerTimeout): error when creating "mysql.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account

[Solution]

[root@k8s-master ~]# vim /etc/kubernetes/apiserver

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

Change it to:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

[Restart the services]

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

systemctl restart $SERVICES

systemctl enable $SERVICES

systemctl status $SERVICES

done

2. [Error 2]

When deploying a Pod, the node machine's log reports errors:

Dec 11 09:30:22 dslave kubelet: E1211 09:30:22.745867   99650 manager.go:1557] Failed to create pod infra container: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (Network timed out while trying to connect to http://gcr.io/v1/repositories/google_containers/pause/images. You may want to check your internet connection or if you are behind a proxy.); Skipping pod "mysql_default"

Dec 11 09:30:22 dslave kubelet: E1211 09:30:22.955470   99650 pod_workers.go:111] Error syncing pod bcbb3b8a-a02a-11e5-8a55-000c298cfaa1, skipping: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (Network timed out while trying to connect to http://gcr.io/v1/repositories/google_containers/pause/images. You may want to check your internet connection or if you are behind a proxy.)

[Solution]

Cause: gcr.io is blocked, so download the image tarball locally instead:

http://www.sunmite.com/linux/installing-kubernetes-cluster-on-centos7-to-manage-pods-and-services/attachment/pause-0-8-0/

Import it on each node:

docker load --input pause-0.8.0.tar
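****** Then confirm the image landed and restart kubelet so pending pods retry (assuming the tarball contains gcr.io/google_containers/pause:0.8.0):

docker images | grep pause

systemctl restart kubelet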

That completes the setup. If you run into problems, contact: [email protected]
