Quickly Setting Up a k8s Experiment Environment

Most of this is compiled from articles found online; I have also noted fixes for the problems I ran into myself.

Installing k8s with kubeadm
Prepare the environment:
1. Configure the hosts file on every node.
2. Disable the system firewall.
3. Disable SELinux.
4. Disable swap.
5. Set kernel parameters so that traffic crossing the bridge also passes through the iptables/netfilter framework; add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl -p
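The prep steps above boil down to a handful of root commands. A minimal sketch, assuming CentOS 7 defaults; the persistent part of the swap change (commenting the swap entry in /etc/fstab) is demonstrated on a temp copy so nothing on the current host is modified, and the sample fstab lines are placeholders:

```shell
# Steps 2-4 as root commands (run these for real on every node):
#   systemctl disable --now firewalld                              # step 2
#   setenforce 0                                                   # step 3 (now)
#   sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
#   swapoff -a                                                     # step 4 (now)
# Swap must also stay off after reboot: comment its line in /etc/fstab.
# Demonstrated here on a temp copy with placeholder entries:
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.demo
grep '^#' /tmp/fstab.demo
```

Applying the same sed to the real /etc/fstab (as root) makes the swap change permanent.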
Install with kubeadm:
1. First configure the Aliyun K8S YUM repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum -y install epel-release
yum clean all
yum makecache

2. Install kubeadm and related packages:
yum -y install docker kubelet kubeadm kubectl kubernetes-cni

3. Start the Docker and kubelet services:

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Note: at this point the kubelet service is in an abnormal state because its main config file, kubelet.conf, is missing. This can be ignored for now; the file is only generated after the master node has been initialized.

4. Configure an image accelerator
Because gcr.io cannot be reached directly to pull images, a domestic container registry mirror (accelerator) is needed.
To configure an Aliyun accelerator:
Log in at https://cr.console.aliyun.com/
Find and click the image accelerator button; the page shows your personal accelerator URL, and selecting the CentOS tab shows how to configure it.
Note: when using Docker on Aliyun with the accelerator configured, a broken daemon.json can prevent the docker daemon from starting; this can be fixed by correcting daemon.json.

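The daemon.json fix referred to above did not survive reposting. A sketch of the usual remedy, assuming the common cause (invalid JSON, or options that clash with flags already set in docker.service): reduce /etc/docker/daemon.json to valid JSON holding just the mirror entry, then restart docker. Everything below is illustrative: the file is written under /tmp, and `<your-id>` stands for your personal accelerator ID.

```shell
# Assumption: the daemon fails because daemon.json is invalid JSON or its
# options conflict with flags in the docker.service unit file.
# Real path: /etc/docker/daemon.json, then:
#   systemctl daemon-reload && systemctl restart docker
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
grep registry-mirrors /tmp/docker-demo/daemon.json
```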
5. Download the K8S images
With the accelerator working, download the k8s images and retag them with the k8s.gcr.io/ prefix so that kubeadm recognizes them.

#!/bin/bash

images=(kube-proxy-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0 etcd-amd64:3.1.12 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8 k8s-dns-dnsmasq-nanny-amd64:1.14.8)

for imageName in "${images[@]}" ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done
The script does three things: pulls each required image from the keveon mirror, retags it to the k8s.gcr.io name that kubeadm expects, and removes the mirror-named tag.
Note: the image versions must exactly match the version installed by kubeadm, otherwise kubeadm init will time out.
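The pull/retag/remove naming logic can be sanity-checked without Docker installed; the loop below only prints the source-to-target mapping (for an illustrative three-image subset) that the script applies:

```shell
# Print the source -> target name mapping the retag loop uses (no docker needed)
for imageName in kube-proxy-amd64:v1.10.0 etcd-amd64:3.1.12 pause-amd64:3.1; do
  echo "keveon/$imageName -> k8s.gcr.io/$imageName"
done > /tmp/retag-map.txt
cat /tmp/retag-map.txt
```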

6. Initialize the K8S master
Once the script has finished downloading the images, run: kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16
Note: the --kubernetes-version=v1.10.0 flag is required; without it, kubeadm tries to fetch the latest version number from a Google site that is blocked in China. v1.10.0 is used here, and as noted earlier, the downloaded image versions must match the K8S version or init will time out. The pod CIDR should be 10.244.0.0/16 to match flannel's default configuration.
The command takes roughly a minute; while it runs you can follow its progress with tail -f /var/log/messages. Save the final block of its output, as it is needed later to join worker nodes.

7. Configure kubectl credentials

# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For root
export KUBECONFIG=/etc/kubernetes/admin.conf
This can also be made permanent via ~/.bash_profile:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
8. Install the flannel network
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
9. Join node1 and node2 to the cluster
Note: the join command used here is exactly the one printed at the end of the master init output, which you were asked to save.
By default, the master node does not schedule workloads. For an all-in-one k8s environment, remove the master taint so the master also acts as a node:

kubectl taint nodes --all node-role.kubernetes.io/master-

10. Verify the K8S master
# Check node status
kubectl get nodes
# Check pod status
kubectl get pods --all-namespaces
# Check cluster component status
kubectl get cs

Initializing the cluster

[root@vm-for-lhz-test-191 ~]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vm-for-lhz-test-191 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.191]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vm-for-lhz-test-191] and IPs [192.168.100.191]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.504825 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node vm-for-lhz-test-191 as master by adding a label and a taint
[markmaster] Master vm-for-lhz-test-191 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: nz32q6.6hgq3hhrmokdprnr
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

You can now join any number of machines by running the following on each node
as root:

If you forget the token, you can create a new one on the master with: kubeadm token create
[root@vm-for-lhz-test-192 ~]# kubeadm join 192.168.100.191:6443 --token b3cr72.33pjt6evrwtxrgsh --discovery-token-unsafe-skip-ca-verification
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.100.191:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.100.191:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.100.191:6443"
[discovery] Successfully established connection with API Server "192.168.100.191:6443"

This node has joined the cluster:

  • Certificate signing request was sent to master and a response
    was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@vm-for-lhz-test-191 ~]# kubectl get nodes --all-namespaces=true
NAME                  STATUS    ROLES     AGE       VERSION
vm-for-lhz-test-191   Ready     master    42m       v1.10.0
vm-for-lhz-test-192   Ready     <none>    3m        v1.10.0
vm-for-lhz-test-193   Ready     <none>    11m       v1.10.0
vm-for-lhz-test-194   Ready     <none>    3m        v1.10.0

Troubleshooting: if kubelet fails with the error below, docker's cgroup driver must be changed to systemd:
Apr 14 18:16:40 vm-for-lhz-test-193 kubelet: F0414 18:16:40.481654 18286 server.go:233] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
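As an alternative to editing the unit file (a sketch, not from the original post), the driver can be set in daemon.json, which survives docker package upgrades. The real path is /etc/docker/daemon.json, followed by systemctl daemon-reload && systemctl restart docker; the demo below writes under /tmp for illustration:

```shell
# Assumption: this docker version reads "exec-opts" from daemon.json.
# Real path: /etc/docker/daemon.json, then:
#   systemctl daemon-reload && systemctl restart docker
mkdir -p /tmp/docker-cgroup-demo
cat > /tmp/docker-cgroup-demo/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
grep cgroupdriver /tmp/docker-cgroup-demo/daemon.json
```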

Original source: http://blog.51cto.com/andylhz2009/2108089
