Setting Up a Kubernetes v1.11.2 Cluster on Ubuntu 16.04

1. Node overview

|          | master     | cluster-1  | cluster-2  | cluster-3  |
|----------|------------|------------|------------|------------|
| hostname | k8s-55     | k8s-54     | k8s-53     | k8s-52     |
| ip       | 10.2.49.55 | 10.2.49.54 | 10.2.49.53 | 10.2.49.52 |

2. Configure the network and /etc/hosts on every node (details omitted; a minimal example follows).
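For reference, a minimal /etc/hosts derived from the node table in section 1 might look like this; adjust the entries to your own environment:

```
10.2.49.55  k8s-55
10.2.49.54  k8s-54
10.2.49.53  k8s-53
10.2.49.52  k8s-52
```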

3. Install Kubernetes

```
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
```

Check which versions the repository offers:

```
sudo apt-cache madison kubelet
```

As a rule, install neither the newest nor the oldest release; here we install 1.11.2:

```
sudo apt install kubelet=1.11.2-00 kubeadm=1.11.2-00 kubectl=1.11.2-00
```

At this point the Kubernetes binaries are installed.
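Optionally — this step is an addition to the original walkthrough — you can pin these packages so a routine apt upgrade does not move them off 1.11.2:

```
sudo apt-mark hold kubelet kubeadm kubectl
```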

4. Install Docker

Because Kubernetes will manage Docker, the Docker version must be compatible with the Kubernetes release. Check the compatibility notes at https://github.com/kubernetes/kubernetes: find the release you are installing and read its CHANGELOG. This article installs v1.11.2.

From the changelog you can see that the highest validated Docker version is 17.03.x; within the compatible range, prefer the newest release.

Install Docker:

```
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
```

Check the available Docker versions (sudo apt-cache madison docker-ce), then install 17.03 and enable the service:

```
sudo apt install docker-ce=17.03.3~ce-0~ubuntu-xenial
sudo systemctl enable docker
```

If you need to configure a registry mirror (accelerator), edit /etc/systemd/system/multi-user.target.wants/docker.service.
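A common alternative to editing the unit file — not the method this article uses — is to set the mirror in /etc/docker/daemon.json. The mirror URL below is a placeholder you must replace with your own:

```
sudo tee /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-mirror>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl restart docker
```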

5. Pull the Kubernetes initialization images

List the images kubeadm requires:

```
kubeadm config images list
```
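For the v1.11.3 target used below, the list should look roughly like this (versions taken from the pull script that follows):

```
k8s.gcr.io/kube-apiserver-amd64:v1.11.3
k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
k8s.gcr.io/kube-scheduler-amd64:v1.11.3
k8s.gcr.io/kube-proxy-amd64:v1.11.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
```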

Google's registries cannot be pulled from directly inside mainland China, so you can either use someone else's mirrored images or build your own via Aliyun or similar. This article uses anjia0532's mirror; the script is below.

```bash
#!/bin/bash
KUBE_VERSION=v1.11.3
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.2.18
DNS_VERSION=1.1.3
username=anjia0532

images="google-containers.kube-proxy-amd64:${KUBE_VERSION}
google-containers.kube-scheduler-amd64:${KUBE_VERSION}
google-containers.kube-controller-manager-amd64:${KUBE_VERSION}
google-containers.kube-apiserver-amd64:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd-amd64:${ETCD_VERSION}
coredns:${DNS_VERSION}
"

# Pull each image from the mirror, retag it under k8s.gcr.io, then drop the mirror tag.
for image in $images
do
    docker pull ${username}/${image}
    docker tag ${username}/${image} k8s.gcr.io/${image}
    #docker tag ${username}/${image} gcr.io/google_containers/${image}
    docker rmi ${username}/${image}
done

unset images username

# The mirror prefixes the core components with "google-containers."; strip that
# prefix so the tags match what kubeadm expects.
docker tag k8s.gcr.io/google-containers.kube-apiserver-amd64:${KUBE_VERSION} k8s.gcr.io/kube-apiserver-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-apiserver-amd64:${KUBE_VERSION}
docker tag k8s.gcr.io/google-containers.kube-controller-manager-amd64:${KUBE_VERSION} k8s.gcr.io/kube-controller-manager-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-controller-manager-amd64:${KUBE_VERSION}
docker tag k8s.gcr.io/google-containers.kube-scheduler-amd64:${KUBE_VERSION} k8s.gcr.io/kube-scheduler-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-scheduler-amd64:${KUBE_VERSION}
docker tag k8s.gcr.io/google-containers.kube-proxy-amd64:${KUBE_VERSION} k8s.gcr.io/kube-proxy-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-proxy-amd64:${KUBE_VERSION}
```

Save the script as pull.sh and run sh pull.sh; the required images are pulled and retagged automatically.
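To confirm everything landed under the expected names, list the local images:

```
docker images | grep k8s.gcr.io
```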

6. Initialize Kubernetes

```
sudo kubeadm init --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 10.2.49.55
```

- --kubernetes-version pins the control-plane version
- --pod-network-cidr prepares for adopting flannel as the network add-on later
- --apiserver-advertise-address can be omitted if the machine has only a single NIC

Output of a successful initialization:
```
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```
As a regular user, set up kubectl access exactly as the output instructs:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
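At this point kubectl can reach the API server; note that the master will report NotReady until a pod network is installed in the next step. A quick check:

```
kubectl get nodes
```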

7. Install the flannel network add-on

```
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
```

Looking inside kube-flannel.yml, the image it uses is quay.io/coreos/flannel:v0.10.0-amd64.

If the apply stalls on the image pull, download the image manually first, as shown below.
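Pre-pulling the flannel image named above on each node avoids the stall:

```
docker pull quay.io/coreos/flannel:v0.10.0-amd64
```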

Once the add-on is installed, the flannel pods normally come up on their own; if any pod shows an error, it almost always means some image failed to download.
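You can watch the add-on pods come up with:

```
kubectl get pods -n kube-system -o wide
```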

8. Set up the worker nodes

Load the IPVS kernel modules:

```
sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
```
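These modules do not survive a reboot. One way to load them at boot on Ubuntu 16.04 — an addition to the original steps — is a modules-load.d entry:

```
sudo tee /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
```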

Join the Kubernetes cluster by running the kubeadm join command printed at the end of kubeadm init:

```
sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```

On the master node, check the nodes:

```
kubectl get node
```

A Ready status is what you want. If a node is not Ready, it is usually because some image failed to download; the pause image is the hardest to pull, and the pull.sh approach from earlier works for it too (see the sketch below).
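For example, on a worker node you could fetch pause from the same mirror and retag it (assuming the mirror image name follows the pattern used in pull.sh):

```
docker pull anjia0532/pause:3.1
docker tag anjia0532/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi anjia0532/pause:3.1
```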

On the master node, check all pods:

```
kubectl get pod --all-namespaces -o wide
```

If every pod is Running, the cluster has been created successfully and is operating normally.

Creating CA certificates and related topics will be covered in a future post.

Original article: https://www.cnblogs.com/sumoning/p/9718854.html
