Setting up a local Kubernetes cluster with kubeadm

1. Environment Preparation
(All machines run CentOS 7.6.)
Run the following on every machine:

yum install chrony -y    # the package is "chrony"; the service it provides is "chronyd"
systemctl start chronyd
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.130 master
192.168.8.131 node01
192.168.8.132 node02
192.168.8.133 node03

systemctl disable firewalld
systemctl stop firewalld

setenforce 0    # takes effect immediately, but only until the next reboot

vim /etc/selinux/config
SELINUX=disabled

This change is permanent, but requires a reboot to take effect.
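The same edit can be made non-interactively; a sketch using sed, assuming the stock config still reads SELINUX=enforcing:

```shell
# Disable SELinux permanently without opening an editor (run as root).
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config    # should now show SELINUX=disabled
```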

Configure the Docker package repository
On mirrors.aliyun.com, go to docker-ce, then linux, then centos, right-click docker-ce.repo, and copy the link address.

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2019-05-19 17:39:51-- https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)...

Run the same command on the other three machines.

Next, run the following on the master node.
Update the yum repositories:

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
CentOS-Base.repo CentOS-fasttrack.repo CentOS-Vault.repo epel-testing.repo
CentOS-CR.repo CentOS-Media.repo docker-ce.repo kubernetes.repo
CentOS-Debuginfo.repo CentOS-Sources.repo epel.repo
[root@master yum.repos.d]# vim CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Change the baseurl of the [base], [updates], and [extras] sections to the Aliyun mirror as shown above, save and exit, then send the file to the three workers:

scp /etc/yum.repos.d/CentOS-Base.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/CentOS-Base.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/CentOS-Base.repo node03:/etc/yum.repos.d/
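The three scp invocations can be collapsed into a loop; a sketch, assuming the node01–node03 hostnames from /etc/hosts resolve and SSH access is set up:

```shell
# Push the repo file to every worker in one go.
for n in node01 node02 node03; do
    scp /etc/yum.repos.d/CentOS-Base.repo "$n":/etc/yum.repos.d/
done
```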

yum install docker-ce -y
systemctl enable docker
systemctl start docker

Adjust the Docker startup parameters

[root@master ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify

# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

The line that was added is ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT; it resets the FORWARD chain policy to ACCEPT, which Docker changes to DROP by default.

systemctl daemon-reload
systemctl restart docker
docker info    # check the effective settings

Verify that the FORWARD chain policy is now ACCEPT:

[root@master ~]# iptables -vnL
Chain INPUT (policy ACCEPT 1307 packets, 335K bytes)
 pkts bytes target                    prot opt in       out      source     destination
 2794  168K KUBE-SERVICES             all  --  *        *        0.0.0.0/0  0.0.0.0/0   ctstate NEW /* kubernetes service portals */
 2794  168K KUBE-EXTERNAL-SERVICES    all  --  *        *        0.0.0.0/0  0.0.0.0/0   ctstate NEW /* kubernetes externally-visible service portals */
 773K  188M KUBE-FIREWALL             all  --  *        *        0.0.0.0/0  0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                    prot opt in       out      source     destination
    0     0 KUBE-FORWARD              all  --  *        *        0.0.0.0/0  0.0.0.0/0   /* kubernetes forwarding rules */
    0     0 KUBE-SERVICES             all  --  *        *        0.0.0.0/0  0.0.0.0/0   ctstate NEW /* kubernetes service portals */
    0     0 DOCKER-USER               all  --  *        *        0.0.0.0/0  0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *        *        0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                    all  --  *        docker0  0.0.0.0/0  0.0.0.0/0   ctstate RELATED,ESTABLISHED
    0     0 DOCKER                    all  --  *        docker0  0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                    all  --  docker0  !docker0 0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                    all  --  docker0  docker0  0.0.0.0/0  0.0.0.0/0

Send the unit file to the three workers:

scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service

Check the bridge netfilter settings

[root@master ~]# sysctl -a |grep bridge
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.ens33.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
The values of net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables differ between environments; add a configuration file to make sure both are 1:
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Reload the settings:

[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf

scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node03:/etc/sysctl.d/
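Note that copying the file alone does not activate the settings on the workers; sysctl -p must also be run there. A sketch combining both steps, assuming root SSH access to the nodes:

```shell
# Copy the sysctl config to each worker and load it immediately.
for n in node01 node02 node03; do
    scp /etc/sysctl.d/k8s.conf "$n":/etc/sysctl.d/
    ssh "$n" 'sysctl -p /etc/sysctl.d/k8s.conf'
done
```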

Create the kubernetes.repo file locally

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

Likewise, on the Aliyun mirror site, click kubernetes, then yum, then repos, and locate kubernetes-el7-x86_64/.

The baseurl in the file is the link address of kubernetes-el7-x86_64/; the two gpgkey entries are the two link addresses under doc in the parent directory.

Check with yum repolist, then list the packages whose names start with kube:

[root@master yum.repos.d]# yum list all |grep "^kube"
kubeadm.x86_64 1.14.2-0 @kubernetes
kubectl.x86_64 1.14.2-0 @kubernetes
kubelet.x86_64 1.14.2-0 @kubernetes
kubernetes-cni.x86_64 0.7.5-0 @kubernetes
kubernetes.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-ansible.noarch 0.6.0-0.1.gitd65ebd5.el7 epel
kubernetes-client.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-master.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-node.x86_64 1.5.2-0.7.git269f928.el7 extras

Install the tools

yum install -y kubeadm kubectl kubelet

Adjust the kubelet parameters (these are read by kubeadm):

[root@master yum.repos.d]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
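An alternative to --fail-swap-on=false is to disable swap outright, which is what kubeadm expects by default; a sketch (run as root on every node; the sed pattern assumes a standard uncommented fstab swap entry):

```shell
# Turn off swap now, and comment out the swap line in /etc/fstab
# so the change survives reboots.
swapoff -a
sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab
```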

Review the default cluster initialization parameters:

[root@master yum.repos.d]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Initialize the cluster (the 10.244.0.0/16 pod network CIDR matches Flannel's default configuration, and --ignore-preflight-errors=Swap pairs with the kubelet setting above):

kubeadm init --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap

On success, kubeadm prints its completion output.

Record the kubeadm join command at the end of that output; it is needed later when the workers join the cluster.
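The success output also typically includes the following commands for making kubectl usable as a regular user (the paths shown are kubeadm's standard ones; run them on the master before kubectl get nodes):

```shell
# Copy the admin kubeconfig into the current user's home so kubectl can find it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```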

Check the nodes:

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 145m v1.14.2

The status is NotReady; a network plugin needs to be deployed.

Deploy Flannel

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-q55g7 1/1 Running 0 150m
coredns-fb8b8dccf-vk7td 1/1 Running 0 150m
etcd-master 1/1 Running 0 149m
kube-apiserver-master 1/1 Running 0 149m
kube-controller-manager-master 1/1 Running 0 149m
kube-flannel-ds-amd64-gfl77 1/1 Running 0 71s
kube-proxy-4s9f6 1/1 Running 0 150m
kube-scheduler-master 1/1 Running 0 149m

The two coredns pods may still be in a creating state; they will be ready after a short wait.

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 152m v1.14.2

Send the repo file to the other three workers:

[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/
root@node01's password:
kubernetes.repo                               100%  269   169.4KB/s   00:00
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/
root@node02's password:
kubernetes.repo                               100%  269   277.9KB/s   00:00
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node03:/etc/yum.repos.d/
root@node03's password:
kubernetes.repo

Next, join the three workers to the cluster. On node01, node02, and node03, run:

[root@node01 ~]# yum install -y kubeadm kubelet

Then copy the kubelet config file from the master:

[root@master ~]# scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
root@node01's password:
kubelet                                       100%   42    32.7KB/s   00:00
[root@master ~]# scp /etc/sysconfig/kubelet node02:/etc/sysconfig/
root@node02's password:
kubelet                                       100%   42    32.9KB/s   00:00
[root@master ~]# scp /etc/sysconfig/kubelet node03:/etc/sysconfig/
root@node03's password:
kubelet                                       100%   42    29.4KB/s   00:00
[root@master ~]#

First pull the pause image from the Aliyun registry on each worker (the default imageRepository, k8s.gcr.io, is not directly reachable from China), then run the join command recorded earlier:

[root@node01 ~]# kubeadm join 192.168.8.130:6443 --token kxmqr4.1vza1kh70vra2d2u --discovery-token-ca-cert-hash sha256:6537d556e18c1799f10ac567dcaa41ee2b3197aa4c464747bc50243a6142bc1c --ignore-preflight-errors=Swap
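If the recorded join command is lost, or the token has expired (the default TTL is 24h, as shown in the init-defaults output), a fresh one can be generated on the master; a sketch:

```shell
# Print a complete, ready-to-run join command with a newly created token.
kubeadm token create --print-join-command
```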

Check the nodes:

[root@master /]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 172m v1.14.2
node01 Ready <none> 7m39s v1.14.2
node02 Ready <none> 48s v1.14.2
node03 Ready <none> 43s v1.14.2
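As a quick smoke test of the finished cluster, a throwaway deployment can be scheduled across the workers (nginx-test is a hypothetical name; run on the master):

```shell
# Create a small deployment, check where its pods land, then clean up.
kubectl create deployment nginx-test --image=nginx
kubectl scale deployment nginx-test --replicas=3
kubectl get pods -o wide    # the NODE column should show the workers
kubectl delete deployment nginx-test
```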

Original article: https://blog.51cto.com/13670314/2397626
