Kubernetes Multi-Node Deployment Explained

Note: all of the following steps are performed on CentOS 7.

Installing Ansible

Ansible can be installed via yum or pip. Since kubernetes-ansible relies on password-based SSH, sshpass needs to be installed as well:

pip install ansible
wget http://sourceforge.net/projects/sshpass/files/latest/download
tar zxvf download
cd sshpass-1.05
./configure && make && make install
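
As a quick sanity check, make sure both tools are available before continuing:

ansible --version
sshpass -V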

Configuring kubernetes-ansible

# git clone https://github.com/eparis/kubernetes-ansible.git
# cd kubernetes-ansible

# # Set the SSH user to root in group_vars/all.yml
# cat group_vars/all.yml | grep ssh
ansible_ssh_user: root

# # Each kubernetes service gets its own IP address. These are not real IPs.
# # You need only select a range of IPs which are not in use elsewhere in your
# # environment. This must be done even if you do not use the network setup
# # provided by the ansible scripts.
# cat group_vars/all.yml | grep kube_service_addresses
kube_service_addresses: 10.254.0.0/16

# # Store the root password
# echo "password" > ~/rootpassword

Configure the IP addresses of the master, etcd, and minions:

# cat inventory
[masters]
192.168.0.7

[etcd]
192.168.0.7

[minions]
# kube_ip_addr is the Pod address pool on this minion; the default netmask is /24
192.168.0.3  kube_ip_addr=10.0.1.1
192.168.0.6  kube_ip_addr=10.0.2.1

Test connectivity to every machine and set up SSH keys:

# ansible-playbook -i inventory ping.yml  # this prints some error messages that can be safely ignored
# ansible-playbook -i inventory keys.yml

kubernetes-ansible does not yet handle all dependencies, so a few things have to be configured manually first:

# # Install iptables
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
# # Add the Kubernetes repository for CentOS 7
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
# # Configure SSH to avoid connection timeouts
# sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'

Configure the Docker network; in essence this creates the kbr0 bridge, assigns it an IP address, and sets up the routes:

# ansible-playbook -i inventory hack-network.yml  

PLAY [minions] **************************************************************** 

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
changed: [192.168.0.3]
changed: [192.168.0.6]

TASK: [network-hack-bridge | Configure docker to use the bridge inferface] ****
changed: [192.168.0.6]
changed: [192.168.0.3]

PLAY [minions] **************************************************************** 

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | Set up a network config file] ********************
skipping: [192.168.0.3]
skipping: [192.168.0.6]

TASK: [network-hack-routes | Set up a static routing table] *******************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | apply changes] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | upload script] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | run script] **********************************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | remove script] *******************************
changed: [192.168.0.3]
changed: [192.168.0.6]

PLAY RECAP ********************************************************************
192.168.0.3                : ok=10   changed=7    unreachable=0    failed=0
192.168.0.6                : ok=10   changed=7    unreachable=0    failed=0
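
To double-check the result on the minions, you can inspect the bridge and routing table with ad-hoc commands like the following (output omitted; kbr0 is the bridge created above):

# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'ip addr show kbr0'
# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'ip route'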

Finally, install and configure Kubernetes on all nodes:

ansible-playbook -i inventory setup.yml

After the playbook finishes, the kube-related services should all be running:

# # Service status
# ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
SSH password:
192.168.0.3 | success | rc=0 >>
kube-proxy.service                                                                                     loaded active running   Kubernetes Kube-Proxy Server
kubelet.service                                                                                        loaded active running   Kubernetes Kubelet Server

192.168.0.7 | success | rc=0 >>
kube-apiserver.service                                                      loaded active running   Kubernetes API Server
kube-controller-manager.service                                             loaded active running   Kubernetes Controller Manager
kube-scheduler.service                                                      loaded active running   Kubernetes Scheduler Plugin

192.168.0.6 | success | rc=0 >>
kube-proxy.service                                                                                     loaded active running   Kubernetes Kube-Proxy Server
kubelet.service                                                                                        loaded active running   Kubernetes Kubelet Server

# # Listening ports
# ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
SSH password:
192.168.0.7 | success | rc=0 >>
tcp        0      0 192.168.0.7:7080        0.0.0.0:*               LISTEN      14486/kube-apiserve
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      14544/kube-schedule
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      14515/kube-controll
tcp6       0      0 :::7001                 :::*                    LISTEN      13986/etcd
tcp6       0      0 :::4001                 :::*                    LISTEN      13986/etcd
tcp6       0      0 :::8080                 :::*                    LISTEN      14486/kube-apiserve 

192.168.0.3 | success | rc=0 >>
tcp        0      0 192.168.0.3:10250       0.0.0.0:*               LISTEN      9500/kubelet
tcp6       0      0 :::46309                :::*                    LISTEN      9524/kube-proxy
tcp6       0      0 :::48500                :::*                    LISTEN      9524/kube-proxy
tcp6       0      0 :::38712                :::*                    LISTEN      9524/kube-proxy     

192.168.0.6 | success | rc=0 >>
tcp        0      0 192.168.0.6:10250       0.0.0.0:*               LISTEN      9474/kubelet
tcp6       0      0 :::52870                :::*                    LISTEN      9498/kube-proxy
tcp6       0      0 :::57961                :::*                    LISTEN      9498/kube-proxy
tcp6       0      0 :::40720                :::*                    LISTEN      9498/kube-proxy

Run the following commands to check that everything is working properly:

# curl -s -L http://192.168.0.7:4001/version # check etcd
etcd 0.4.6
# curl -s -L http://192.168.0.7:8080/api/v1beta1/pods  | python -m json.tool # check apiserver
{
    "apiVersion": "v1beta1",
    "creationTimestamp": null,
    "items": [],
    "kind": "PodList",
    "resourceVersion": 8,
    "selfLink": "/api/v1beta1/pods"
}
# curl -s -L http://192.168.0.7:8080/api/v1beta1/minions  | python -m json.tool # check apiserver
# curl -s -L http://192.168.0.7:8080/api/v1beta1/services  | python -m json.tool # check apiserver
# kubectl get minions
NAME
192.168.0.3
192.168.0.6
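
Optionally, the apiserver also exposes a /healthz endpoint that can be probed directly (assuming this build exposes it):

# curl -s -L http://192.168.0.7:8080/healthz   # expect: ok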

Deploying an Apache Service

First, create a Pod:

# cat ~/apache.json
{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}

# kubectl create -f apache.json
# kubectl get pod fedoraapache
NAME                IMAGE(S)            HOST                LABELS              STATUS
fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Waiting

# # Pulling the image is slow, so the Pod stays in Waiting for quite a while; once the image is downloaded it starts quickly
# kubectl get pod fedoraapache
NAME                IMAGE(S)            HOST                LABELS              STATUS
fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Running

# # Check the container status on 192.168.0.6
# docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                NAMES
77dd7fe1b24f        fedora/apache:latest      "/run-apache.sh"    31 minutes ago      Up 31 minutes                            k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
1455249f2c7d        kubernetes/pause:latest   "/pause"            About an hour ago   Up About an hour    0.0.0.0:80->80/tcp   k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora/apache       latest              2e11d8fd18b3        7 weeks ago         554.1 MB
kubernetes/pause    latest              6c4579af347b        4 months ago        239.8 kB
# iptables-save | grep 2.2
-A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
-A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
# curl localhost  # the Pod is up and the port mapping works
Apache
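
Because the Pod maps containerPort 80 to hostPort 80, the same page should also be reachable from other machines through the minion's own address:

# curl 192.168.0.6   # should return the same Apache page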

Replication Controllers

A replication controller keeps the desired number of containers running, providing load balancing and high availability for the service:

A replication controller combines a template for pod creation (a “cookie-cutter” if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.

# cat replica.json
{
  "id": "apacheController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "labels": {"name": "fedoraapache"},
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {"name": "fedoraapache"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80
            }]
          }]
        }
      },
      "labels": {"name": "fedoraapache"}
    }
  }
}

# kubectl create -f replica.json
apacheController

# kubectl get replicationController
NAME                IMAGE(S)            SELECTOR            REPLICAS
apacheController    fedora/apache       name=fedoraapache   3

# kubectl get pod
NAME                                   IMAGE(S)            HOST                LABELS              STATUS
fedoraapache                           fedora/apache       192.168.0.6/        name=fedoraapache   Running
cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running
cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running

As the output shows, three containers are now running.
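
A quick way to see the controller at work is to delete one of the pods and watch a replacement being created (an illustrative experiment; the generated pod name will differ):

# kubectl delete pod fedoraapache
# kubectl get pod
# # a new pod with a generated name appears shortly, keeping the replica count at 3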

Services

With the replication controller in place, multiple Pods are now running. However, each Pod is assigned its own IP address, and those addresses can change as the system runs. So how can the service be accessed from outside? That is exactly what a Service is for.

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targeted is determined by a label selector.

As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.

Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define “virtual” IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.

# cat service.json
{
  "id": "fedoraapache",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache",
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987
}
# kubectl create -f service.json
fedoraapache
# kubectl get service
NAME                LABELS              SELECTOR                                  IP                  PORT
kubernetes-ro                           component=apiserver,provider=kubernetes   10.254.0.2          80
kubernetes                              component=apiserver,provider=kubernetes   10.254.0.1          443
fedoraapache                            name=fedoraapache                         10.254.0.3          8987

# # Run the following from a minion
# curl 10.254.0.3:8987
Apache
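
As the quoted passage explains, the portal IP is not assigned to any interface; kube-proxy realizes it with iptables rules on every node. You can peek at those rules on a minion with something like the command below (the exact chain and rule format depend on the kube-proxy version, so treat it as illustrative):

# iptables-save -t nat | grep 10.254.0.3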

A Service can also be given a public IP, provided a cloud provider has been configured. Currently supported cloud providers include GCE, AWS, OpenStack, oVirt, Vagrant, and others.

For some parts of your application (e.g. your frontend) you want to expose a service on an external (publically visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.
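
As a rough sketch (untested here, since no cloud provider is configured in this setup, and the service id is arbitrary), such a definition simply adds the createExternalLoadBalancer flag to an ordinary v1beta1 service:

{
  "id": "fedoraapache-public",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {"name": "fedoraapache"},
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987,
  "createExternalLoadBalancer": true
}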

Note: OpenStack support is implemented using Rackspace's open-source github.com/rackspace/gophercloud library.

Health Check

Currently, there are three types of application health checks that you can choose from:

* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
* Container Exec - The Kubelet will execute a command inside your container. If it returns “ok” it will be considered a success.
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can’t it is considered a failure.

In all cases, if the Kubelet discovers a failure, the container is restarted.

The container health checks are configured in the “LivenessProbe” section of your container config. There you can also specify an “initialDelaySeconds” that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

Here is an example config for a pod with an HTTP health check:

kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: php
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
        # defines the health checking
        livenessProbe:
          # turn on application health checking
          enabled: true
          type: http
          # length of time to wait for a pod to initialize
          # after pod startup, before applying health checking
          initialDelaySeconds: 30
          # an http probe
          httpGet:
            path: /_status/healthz
            port: 8080
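
By analogy, a TCP socket check would swap the httpGet section for a tcpSocket one. The snippet below is an unverified sketch for the same API version; the exact field names are an assumption and may need adjusting:

livenessProbe:
  enabled: true
  type: tcp
  # wait 30 seconds after startup before probing
  initialDelaySeconds: 30
  tcpSocket:
    port: 80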
