Deploying Kubernetes + Flannel on CentOS

I. Preparation

1. Three CentOS hosts

k8s (i.e. Kubernetes, same below) master: 10.16.42.200

k8s node1: 10.16.42.198

k8s node2: 10.16.42.199

2. Downloads (via Baidu Netdisk): k8s-1.0.1 (the beta release k8s-1.1.2.beta also works), Docker-1.8.2, cadvisor-0.14.0, etcd-2.2.1, flannel-0.5.5

II. ETCD Cluster Deployment

Append the following entries to the /etc/hosts file on each of the three hosts:

10.16.42.198 bx-42-198
10.16.42.199 bx-42-199
10.16.42.200 bx-42-200

On each of the three hosts, extract etcd.tar and copy the etcd and etcdctl binaries it contains into your working directory (e.g. /openxxs/bin, same below).
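
For example, on each host (a minimal sketch; adjust the source path to wherever this particular tarball places the binaries after extraction):

tar xf etcd.tar
mkdir -p /openxxs/bin
cp etcd etcdctl /openxxs/bin/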

On host 200, create the script start_etcd.sh under /openxxs/bin and run it:

#!/bin/bash

etcd_token=kb2-etcd-cluster
local_name=kbetcd0
local_ip=10.16.42.200
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.16.42.198
node1_port=4010
node2_name=kbetcd2
node2_ip=10.16.42.199
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &

On host 198, create the script start_etcd.sh under /openxxs/bin and run it:

#!/bin/bash

etcd_token=kb2-etcd-cluster
local_name=kbetcd1
local_ip=10.16.42.198
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd0
node1_ip=10.16.42.200
node1_port=4010
node2_name=kbetcd2
node2_ip=10.16.42.199
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &

On host 199, create the script start_etcd.sh under /openxxs/bin and run it:

#!/bin/bash

etcd_token=kb2-etcd-cluster
local_name=kbetcd2
local_ip=10.16.42.199
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.16.42.198
node1_port=4010
node2_name=kbetcd0
node2_ip=10.16.42.200
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &

On each host, run commands like the following to check whether etcd is running properly:

curl -L http://10.16.42.198:4012/version
curl -L http://10.16.42.199:4012/version
curl -L http://10.16.42.200:4012/version

If each of them returns {"etcdserver":"2.2.1","etcdcluster":"2.2.0"}, the ETCD cluster has been deployed successfully.
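
Optionally, etcdctl can also report cluster membership and health; any of the three client endpoints works, for example:

./etcdctl --peers=http://10.16.42.200:4012 member list
./etcdctl --peers=http://10.16.42.200:4012 cluster-health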

III. Docker Installation and Configuration

yum install docker-engine-1.8.2-1.el7.centos.x86_64.rpm -y

After installation succeeds on each host, edit the /etc/sysconfig/docker file to read:

OPTIONS="-g /opt/scs/docker --insecure-registry 10.11.150.76:5000"

The --insecure-registry option tells Docker to use your own private image registry.

Edit /lib/systemd/system/docker.service to read:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/docker -d $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
#ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

Note that k8s will take over your Docker daemon; if you previously created or ran containers with Docker on these hosts, back up their data first.
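
After editing the unit file, reload systemd so the new configuration takes effect (Docker itself is started later, in Section V):

systemctl daemon-reload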

IV. Flannel Installation and Configuration

yum localinstall flannel-0.5.5-1.fc24.x86_64.rpm

After installation succeeds on each host, edit /etc/sysconfig/flanneld to read:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://10.16.42.200:4012"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

This sets the etcd endpoint that flanneld uses and the etcd key under which flannel's configuration is stored.

Edit /lib/systemd/system/flanneld.service to read:

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

V. Deployment Steps

1. Start ETCD

ETCD is the foundation on which k8s runs, so start it with the scripts from Section II and verify the cluster is healthy before starting anything else.

2. Start Flannel

Before starting Flannel, stop the docker, iptables, and firewalld services:

systemctl stop docker
systemctl disable iptables firewalld
systemctl stop iptables firewalld

Use ps aux | grep docker to check whether a docker daemon is still running; if so, kill that process.

Use ifconfig to check whether the docker0 bridge or any flannel-related interfaces exist; if they do, remove them with ip link delete docker0.
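
A minimal cleanup sketch (flannel.1 is the default name of flannel's vxlan interface; skip any line whose interface does not exist):

ps aux | grep [d]ocker       # make sure no docker daemon is still running
ip link delete docker0       # remove a leftover docker bridge
ip link delete flannel.1     # remove a stale flannel interface, if any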

With the preparation above done, flannel's configuration still has to be written into ETCD. On host 200, create a file flannel-config.json with the following content:

{
    "Network": "172.16.0.0/12",
    "SubnetLen": 16,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}

This specifies the subnet range flannel may allocate from and the packet encapsulation backend. Write it into ETCD (note that the key here must match the FLANNEL_ETCD_KEY value configured for flannel):

./etcdctl --peers=http://10.16.42.200:4012 set /coreos.com/network/config < flannel-config.json
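
To confirm the configuration was stored, read the key back through the same endpoint:

./etcdctl --peers=http://10.16.42.200:4012 get /coreos.com/network/config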

Then start flannel on each host:

systemctl start flanneld

3. Start Docker

Start the docker service on each host:

systemctl start docker

Then use ifconfig to check the subnets of docker0 and flannel.1; if flannel.1's subnet contains docker0's subnet, flannel has been configured and started correctly.
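
You can also inspect the subnet lease flannel obtained for this host and the Docker options generated from it (the second file is the one written by mk-docker-opts.sh in the unit file above):

cat /run/flannel/subnet.env
cat /run/flannel/docker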

4. Start the k8s services on the master

./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 &

./kube-controller-manager --logtostderr=true --v=0 --master=http://bx-42-200:8080 --cloud-provider="" &

./kube-scheduler --logtostderr=true --v=0 --master=http://bx-42-200:8080 &

Note that kube-controller-manager may report the following error when it starts:

plugins.go:71] No cloud provider specified.
controllermanager.go:290] Failed to start service controller: ServiceController should not be run without a cloudprovider.

This is caused by the --cloud-provider value being empty or the parameter being omitted, but it has little impact on the overall operation of k8s and can be ignored (see the GitHub discussion of this bug).
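
Before moving on to the nodes, a quick way to confirm the apiserver is answering on the insecure port configured above is to hit its health and version endpoints:

curl http://bx-42-200:8080/healthz
curl http://bx-42-200:8080/version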

5. Start the k8s services on the nodes

./kube-proxy --logtostderr=true --v=0 --master=http://bx-42-200:8080 --proxy-mode=iptables &

./kubelet --logtostderr=true --v=0 --api_servers=http://bx-42-200:8080 --address=0.0.0.0 --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &

Note the --pod-infra-container-image parameter. Every pod first starts a /kubernetes/pause:latest container to perform some basic initialization, and by default this image is pulled from gcr.io/google_containers/pause:0.8.0. Because of the GFW you may not be able to reach that registry, so download the image somewhere that can, push it to your own local Docker registry, and have kubelet pull it from there when it starts.
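
A sketch of preparing the pause image in the private registry (run the pull on a machine that can reach gcr.io; the registry address is the same one used for the iperf image in the test below):

docker pull gcr.io/google_containers/pause:0.8.0
docker tag gcr.io/google_containers/pause:0.8.0 10.11.150.76:5000/kubernetes/pause:latest
docker push 10.11.150.76:5000/kubernetes/pause:latest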

Also note that the --proxy-mode=iptables parameter only exists in the experimental k8s 1.1 release; the official explanation of its meaning is:

--proxy-mode="": Which proxy mode to use: ‘userspace‘ (older, stable) or ‘iptables‘ (experimental). If blank, look at the Node object on the Kubernetes API and respect the ‘net.experimental.kubernetes.io/proxy-mode‘ annotation if provided.  Otherwise use the best-available proxy (currently userspace, but may change in future versions).  If the iptables proxy is selected, regardless of how, but the system‘s kernel or iptables versions are insufficient, this always falls back to the userspace proxy.

If --proxy-mode=iptables is not supported, errors similar to the following are reported:

W1119 21:00:12.187930    5595 server.go:200] Failed to start in resource-only container "/kube-proxy": write /sys/fs/cgroup/memory/kube-proxy/memory.swappiness: invalid argument
E1119 21:00:12.198572    5595 proxier.go:197] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-PORTALS-HOST':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.200286    5595 proxier.go:201] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-PORTALS-CONTAINER':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.202162    5595 proxier.go:207] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-NODEPORT-HOST':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.204058    5595 proxier.go:211] Error removing userspace rule: error checking rule: exit status 2: iptables v1.4.21: Couldn't load target `KUBE-NODEPORT-CONTAINER':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
E1119 21:00:12.205848    5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-PORTALS-CONTAINER": exit status 1: iptables: No chain/target/match by that name.
E1119 21:00:12.207467    5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-PORTALS-HOST": exit status 1: iptables: No chain/target/match by that name.
E1119 21:00:12.209000    5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-NODEPORT-HOST": exit status 1: iptables: No chain/target/match by that name.
E1119 21:00:12.210580    5595 proxier.go:220] Error flushing userspace chain: error flushing chain "KUBE-NODEPORT-CONTAINER": exit status 1: iptables: No chain/target/match by that name.

VI. Testing

After everything above has been deployed, run the following command on any host to check node status:

./kubectl -s 10.16.42.200:8080 get nodes

If it returns something like the following, the apiserver is serving normally:

NAME        LABELS                             STATUS    AGE
bx-42-198   kubernetes.io/hostname=bx-42-198   Ready     1d
bx-42-199   kubernetes.io/hostname=bx-42-199   Ready     1d

Create a test.yaml file with the following content:

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-198
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-198
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-199
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: bx-42-199

This creates two pods, test-1 and test-2, on 198, and two pods, test-3 and test-4, on 199. Adjust parameters such as image to match your own environment.

Create the pods from test.yaml:

./kubectl -s 10.16.42.200:8080 create -f test.yaml

Check pod creation and running status with get pods:

./kubectl -s 10.16.42.200:8080 get pods

If they were created and are running normally, the output looks like this:

NAME                    READY     STATUS    RESTARTS   AGE
test-1-a9dn3            1/1       Running   0          1d
test-2-64urt            1/1       Running   0          1d
test-3-edt2l            1/1       Running   0          1d
test-4-l6egg            1/1       Running   0          1d

On 198, use docker exec to get into the container behind test-2 and check its IP with ip addr show; do the same on 199 for the container behind test-4. Then have test-2 and test-4 ping each other's IP; if the pings succeed, flannel is working correctly as well.
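
A rough sketch of that check (the container IDs and pod IPs are placeholders; look them up with docker ps and ip addr show on each host):

# on bx-42-198
docker exec -it <test-2-container-id> ip addr show eth0
docker exec -it <test-2-container-id> ping <test-4-pod-ip>
# on bx-42-199
docker exec -it <test-4-container-id> ping <test-2-pod-ip>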

    
