1. Introduction to Swarm
Swarm is Docker's official cluster-management tool. Its main job is to abstract a number of Docker hosts into a single whole and to manage all the Docker resources on those hosts through one entry point. Swarm is similar to Kubernetes, but lighter weight and with a smaller feature set.
Before Docker 1.12, Swarm was a standalone project; starting with Docker 1.12 it was merged into Docker itself as a subcommand. Swarm is currently the only community-provided tool with native support for Docker cluster management. It turns a system of multiple Docker hosts into a single virtual Docker host and lets containers form subnets that span hosts.
With Docker deployed on several machines joined into a cluster, the whole cluster's resources are abstracted into a pool. To deploy a Docker application, a user simply hands it to Swarm, which allocates resources according to overall cluster utilization, maximizing the cluster's resource usage.
2. Docker Swarm Concepts
Swarm
A swarm is a cluster made up of multiple hosts running Docker Engine.
Since v1.12, cluster management and orchestration have been integrated into Docker Engine. When a Docker Engine initializes a swarm or joins an existing one, it enters swarm mode. Without swarm mode, Docker executes container commands; with swarm mode enabled, Docker gains the ability to orchestrate services.
Docker allows the same host to run swarm services and standalone containers side by side.
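A quick way to tell whether the local engine is in swarm mode is to query docker info (a minimal check; the --format expression assumes a Docker version whose info output exposes the Swarm field, which holds for 1.12+):
```
# Prints "active" once the engine has initialized or joined a swarm,
# "inactive" otherwise.
docker info --format '{{.Swarm.LocalNodeState}}'
```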
Node
Every Docker Engine in the swarm is a node, and there are two types of node: manager and worker.
Manager: accepts service definitions from clients and dispatches tasks to worker nodes; maintains the cluster's desired state, handles cluster management, and performs leader election. By default a manager node also runs tasks, though it can be configured for management duties only.
Worker: receives and executes tasks dispatched by the manager nodes, and reports each task's current status so the managers can maintain every service's desired state.
To deploy an application to the swarm, we run the deploy command on a manager node; the manager breaks the deployment into tasks and assigns them to one or more worker nodes.
Manager nodes handle orchestration and cluster management, keeping the swarm in its desired state. When a swarm has multiple manager nodes, they automatically negotiate and elect a leader to carry out orchestration.
Worker nodes accept and execute the tasks that manager nodes dispatch. By default a manager node is also a worker node, but it can be configured as a manager-only node that does nothing but orchestration and cluster management (see the sketch below).
Worker nodes periodically report their own status and the status of the tasks they are running back to the managers, which is how the managers maintain a view of the whole cluster's state.
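To turn a manager into a manager-only node, the usual approach is to set its availability to drain (a minimal sketch; the hostname is the manager from the deployment later in this article):
```
# Stop scheduling new tasks on this node and move its existing
# service tasks onto other nodes; the node keeps its manager role.
docker node update --availability drain docker01.contoso.com

# Return the node to normal scheduling when desired.
docker node update --availability active docker01.contoso.com
```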
Service
A service defines the tasks to execute on worker nodes. Swarm's central orchestration job is keeping every service in its desired state.
An example of a service:
Start an nginx service in the swarm, using the nginx:latest image, with a replica count of 3.
The manager node creates the service, determines that 3 nginx containers are needed, and hands out the container tasks according to the current state of the worker nodes, say two containers on worker1 and one on worker2.
After running for a while, worker2 suddenly goes down. The manager detects the failure and immediately starts a new nginx container on worker3.
The service is thus kept at its desired state of three replicas.
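Expressed as commands, the example above would look roughly like this (a sketch; service creation and scaling are demonstrated in detail in section 7):
```
# Create the service with three replicas; the manager schedules the
# tasks across the available nodes.
docker service create --name nginx --replicas 3 nginx:latest

# Watch where the replicas land and how swarm reconciles failures.
docker service ps nginx
```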
3. Docker Swarm Architecture
A standard Docker Swarm cluster may correspond to one or more physical servers, each with Docker installed and the HTTP-based Docker API enabled.
The cluster contains a Swarm manager that manages the cluster's container resources. What the manager manages is the cluster, not individual servers: through the manager we can only issue instructions to the cluster as a whole, not tell one particular server what to do (this is the essence of Swarm).
As for the management mechanism, the manager exposes an HTTP interface through which external users manage the cluster. For anything but a small cluster, it is best to dedicate a physical server to the manager role; for learning purposes, the manager and the managed nodes can share one server.
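For reference, the engine's HTTP API simply means dockerd listens on a TCP socket in addition to the local Unix socket. A minimal, unauthenticated sketch (any real deployment should protect this endpoint, for example with TLS):
```
# Have dockerd accept API requests over TCP as well as the local
# socket (equivalent to the "hosts" key in /etc/docker/daemon.json).
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
```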
4. Docker Swarm Features
Key features of Swarm:
- Cluster management integrated with Docker Engine: use the Docker Engine CLI to create a swarm of Docker Engines and deploy application services into the cluster.
- Decentralized design: nodes take the Manager or Worker role, and a Manager node failure does not affect running applications.
- Scaling out and in: declare how many containers each service should run; containers are added or removed automatically to match the desired state.
- Desired state reconciliation: the Swarm manager continuously monitors cluster state and reconciles differences between the current and desired state. For example, if a service is set to run 10 replicas and the server hosting two of them crashes, the manager creates two new replicas to replace the crashed ones and schedules them onto available worker nodes.
- Multi-host networking: services can be attached to an overlay network. When the application is initialized or updated, the Swarm manager automatically assigns IP addresses to the containers on the overlay network.
- Service discovery: the Swarm manager assigns each service in the cluster a unique DNS record and a load-balancing VIP. Every running container in the cluster can be found via Swarm's built-in DNS server.
- Load balancing: requests are balanced across a service's replicas, providing a single entry point. The service entry point can also be exposed to an external load balancer for a further layer of balancing.
- Secure transport: each node in the swarm uses TLS to mutually authenticate and encrypt, securing its communication with the other nodes.
- Rolling updates: during an upgrade, the update is applied to nodes incrementally; if a problem appears, tasks can be rolled back to the previous version.
5. Deploying Docker Swarm
Environment:
| Role | IP address | Hostname | OS | Docker version |
|---|---|---|---|---|
| manager | 192.168.49.41 | docker01.contoso.com | CentOS 7.4 | 1.13.1 |
| worker | 192.168.49.42 | docker02.contoso.com | CentOS 7.4 | 1.13.1 |
| worker | 192.168.49.43 | docker03.contoso.com | CentOS 7.4 | 1.13.1 |
1) On the manager node:
[root@docker01 ~]# docker swarm init --advertise-addr 192.168.49.41
Swarm initialized: current node (rycnd1olbld9kizgb1h5rf8rs) is now a manager.
To add a worker to this swarm, run the following command:
```
docker swarm join --token SWMTKN-1-53n4uazhxx4yf9f2wmebbm0q7nfecrchr485ar8uj1jyw9l7om-3tjc6s01ha8dh49gnkvru81uq 192.168.49.41:2377
```
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
2) On the worker nodes:
[root@docker02 ~]# docker swarm join \
> --token SWMTKN-1-53n4uazhxx4yf9f2wmebbm0q7nfecrchr485ar8uj1jyw9l7om-3tjc6s01ha8dh49gnkvru81uq \
> 192.168.49.41:2377
This node joined a swarm as a worker.
[root@docker03 ~]# docker swarm join \
> --token SWMTKN-1-53n4uazhxx4yf9f2wmebbm0q7nfecrchr485ar8uj1jyw9l7om-3tjc6s01ha8dh49gnkvru81uq \
> 192.168.49.41:2377
This node joined a swarm as a worker.
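If the join token is misplaced, it can be reprinted at any time on a manager:
```
# Print the full join command (token included) for workers or for
# additional managers.
docker swarm join-token worker
docker swarm join-token manager
```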
3) Check the swarm cluster status on the manager:
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
fkqck55wao2dr23t9jq42e6e7 docker03.contoso.com Ready Active
q1gkyqchf5nz20qgzy2dfeyk9 docker02.contoso.com Ready Active
rycnd1olbld9kizgb1h5rf8rs * docker01.contoso.com Ready Active Leader
As shown, docker01's MANAGER STATUS is Leader, making it the management node, while the other two act as worker nodes. The three-server swarm cluster is now complete.
6. Docker Swarm Management Commands
The commonly used swarm management commands fall into these groups:
- docker swarm: configures the cluster, including adding and removing manager and worker nodes
- docker node: manages the machine nodes
- docker service: creates, updates, and rolls back services, and scales tasks out and in
Cluster management: docker swarm
Usage: docker swarm COMMAND
Commands:
- init: initialize a swarm
- join: join a swarm as a worker or manager node
- join-token: manage the tokens used to join the swarm
- leave: leave the swarm
- unlock: unlock the swarm. Uses a user-supplied unlock key to unlock a locked manager node; it is only needed to reactivate a manager on which autolock is enabled and the docker daemon has restarted. The unlock key is printed when autolock is enabled, and can also be retrieved with the docker swarm unlock-key command (see the sketch after the options below).
- unlock-key: manage the swarm unlock key
- update: update the swarm
Common options:
- docker swarm init --advertise-addr <manager_ip>  # address the initialized node advertises
- docker swarm init --force-new-cluster  # force a new cluster from the current state, removing all managers except the local node
- docker swarm init --task-history-limit int  # maximum number of task history entries to retain (integer)
- docker swarm join --advertise-addr string  # address this node advertises to other swarm members
- docker swarm join --token string  # token for joining the swarm
- docker swarm leave -f  # force the current node out of the swarm, ignoring warnings
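A minimal autolock walkthrough, run on a manager (the key Docker prints must be stored somewhere safe):
```
# Enable autolock; Docker prints the unlock key.
docker swarm update --autolock=true

# Retrieve the current unlock key at any later time.
docker swarm unlock-key

# After the docker daemon on this manager restarts, unlock it;
# the command prompts for the key.
docker swarm unlock
```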
Node management: docker node
Usage: docker node COMMAND
Commands:
- demote: demote one or more manager nodes to workers
- inspect: display detailed information on one or more nodes
- ls: list the nodes in the swarm
- promote: promote one or more worker nodes to managers
- ps: list the tasks running on one or more nodes (defaults to the current node)
- rm: remove one or more nodes from the swarm
- update: update a node
Common options:
- docker node inspect <hostname>  # show a node's details, JSON format by default
- docker node inspect --pretty <hostname>  # show a node's details in human-readable format
- docker node update --availability <state>  # set node availability: active (normal), pause (no new tasks), drain (also evict this node's existing worker tasks)
Service management: docker service
Usage: docker service COMMAND
Commands:
- create: create a new service
- inspect: display detailed information on one or more services
- ls: list services
- ps: list the tasks of a given service
- rm: remove one or more services
- scale: scale the replica count of one or more services
- update: update a service
Common options:
- docker service create --replicas N  # number of replicas to run
- docker service create --name NAME  # name for the service's containers
- docker service create --update-delay SECONDS  # delay between container updates
- docker service create --update-parallelism N  # number of containers updated at once (default 1)
- docker service create --update-failure-action TYPE  # behavior when a task update fails: pause (default) or continue
- docker service create --rollback-monitor SECONDS  # interval between container rollbacks
- docker service create --rollback-max-failure-ratio .N  # failure rate tolerated during rollback (note the leading "."; .2 means 20%)
- docker service create --network NETWORK  # docker network to attach
- docker service create --mount type=volume,src=[volume_name],dst=[container_dir]  # attach a volume-type data volume
- docker service create --mount type=bind,src=[host_dir],dst=[container_dir]  # bind-mount a host directory read-write
- docker service create --mount type=bind,src=[host_dir],dst=[container_dir],readonly  # bind-mount a host directory read-only
- docker service create --endpoint-mode dnsrr [service_name]  # use the dnsrr load-balancing mode
- docker service create --config source=CONFIG,target=PATH  # place a docker config file at a path inside the containers
- docker service create --publish HOST_PORT:CONTAINER_PORT [service_name]  # publish a container port
- docker service inspect --pretty [service_name]  # show service details in human-readable format
- docker service logs  # show a service's log output
- docker service ls  # list services
- docker service ps [service_name]  # list all tasks started by the service
- docker service ps -f "desired-state=running" [service_name]  # list only the service's running tasks
- docker service scale [service_name]=N  # scale the service's replica count
- docker service update --args "COMMAND" [service_name]  # change the containers' command arguments
- docker service update --image IMAGE [service_name]  # update the service's image version
- docker service update --rollback [service_name]  # roll the service back to the previous version
- docker service update --network-add NETWORK [service_name]  # attach an additional network
- docker service update --network-rm NETWORK [service_name]  # detach a network
- docker service update --publish-add HOST_PORT:CONTAINER_PORT [service_name]  # publish an additional port
- docker service update --publish-rm HOST_PORT:CONTAINER_PORT [service_name]  # remove a published port
- docker service update --endpoint-mode dnsrr [service_name]  # switch the load-balancing mode to dnsrr
- docker service update --config-add CONFIG,target=PATH [service_name]  # add a config file to the containers
- docker service update --config-rm CONFIG [service_name]  # remove a config file
- docker service update --force [service_name]  # force-update the service
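Combining several of these flags, a realistic service definition might look like the sketch below (the names webapp and appnet are hypothetical):
```
# Three nginx replicas on an overlay network, published on host port
# 8080, updated one task at a time with a 10s pause between tasks.
docker service create \
  --name webapp \
  --replicas 3 \
  --network appnet \
  --publish 8080:80 \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-failure-action pause \
  nginx:latest
```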
7. Docker Swarm Service Management Examples
Creating a service
Create an nginx service:
[root@docker01 ~]# docker service create --name nginx nginx:latest
0q1wk3uw6pmaq1m61eooo5ybv
Check the tasks the nginx service is running:
[root@docker01 ~]# docker service ps nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
y418x232aqd4 nginx.1 nginx:latest docker01.contoso.com Running Running 9 minutes ago
Inspect the nginx service:
[root@docker01 ~]# docker service inspect nginx
[
{
"ID": "0q1wk3uw6pmaq1m61eooo5ybv",
"Version": {
"Index": 21
},
"CreatedAt": "2019-05-02T12:39:33.903002904Z",
"UpdatedAt": "2019-05-02T12:39:33.903002904Z",
"Spec": {
"Name": "nginx",
"TaskTemplate": {
"ContainerSpec": {
"Image": "nginx:[email protected]:e71b1bf4281f25533cf15e6e5f9be4dac74d2328152edf7ecde23abc54e16c1c",
"DNSConfig": {}
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {},
"ForceUpdate": 0
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"MaxFailureRatio": 0
},
"EndpointSpec": {
"Mode": "vip"
}
},
"Endpoint": {
"Spec": {}
},
"UpdateStatus": {
"StartedAt": "0001-01-01T00:00:00Z",
"CompletedAt": "0001-01-01T00:00:00Z"
}
}
]
Scale the nginx service to more replicas:
[root@docker01 ~]# docker service scale nginx=3
nginx scaled to 3
List all running services:
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE
0q1wk3uw6pma nginx replicated 3/3 nginx:latest
Change the nginx service's command arguments:
[root@docker01 ~]# docker service update --args "ping www.baidu.com" nginx
nginx
Update the nginx service to publish container port 80 on host port 8080:
[root@docker01 ~]# docker service update --publish-add 8080:80 nginx
nginx
Update the nginx service to remove the port mapping:
[root@docker01 ~]# docker service update --publish-rm 8080:80 nginx
nginx
Remove the nginx service:
[root@docker01 ~]# docker service rm nginx
nginx
Rolling updates
Set the update policy when creating the service:
[root@docker01 ~]# docker service create --name myredis --replicas 6 --update-delay 10s --update-parallelism 2 --update-failure-action continue redis:3.0.6
swlovrpgox3gl6cs3b94me4ws
Check how the service is running:
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE
swlovrpgox3g myredis replicated 6/6 redis:3.0.6
[root@docker01 ~]# docker service ps myredis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
w42c1giiahwe myredis.1 redis:3.0.6 docker01.contoso.com Running Running 49 seconds ago
j6i3igrlv16o myredis.2 redis:3.0.6 docker03.contoso.com Running Running 51 seconds ago
cp0w3o0t4ei0 myredis.3 redis:3.0.6 docker02.contoso.com Running Running 50 seconds ago
go26b31pw6lp myredis.4 redis:3.0.6 docker01.contoso.com Running Running 50 seconds ago
4nwbkvr1pl4c myredis.5 redis:3.0.6 docker03.contoso.com Running Running 50 seconds ago
t5h4lclip9uz myredis.6 redis:3.0.6 docker02.contoso.com Running Running 51 seconds ago
Update the service manually:
[root@docker01 ~]# docker service update --image redis:3.0.7 myredis
myredis
Watch the update progress:
[root@docker01 ~]# docker service ps myredis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
w42c1giiahwe myredis.1 redis:3.0.6 docker01.contoso.com Running Running 2 minutes ago
3930onk71uav myredis.2 redis:3.0.7 docker03.contoso.com Ready Ready less than a second ago
j6i3igrlv16o \_ myredis.2 redis:3.0.6 docker03.contoso.com Shutdown Running less than a second ago
nol40hkh9g56 myredis.3 redis:3.0.7 docker02.contoso.com Running Running 11 seconds ago
cp0w3o0t4ei0 \_ myredis.3 redis:3.0.6 docker02.contoso.com Shutdown Shutdown 11 seconds ago
q0oqrwrbju3n myredis.4 redis:3.0.7 docker03.contoso.com Running Ready less than a second ago
go26b31pw6lp \_ myredis.4 redis:3.0.6 docker01.contoso.com Shutdown Shutdown less than a second ago
jdqbfpf64xsu myredis.5 redis:3.0.7 docker02.contoso.com Running Running 11 seconds ago
4nwbkvr1pl4c \_ myredis.5 redis:3.0.6 docker03.contoso.com Shutdown Shutdown 11 seconds ago
t5h4lclip9uz myredis.6 redis:3.0.6 docker02.contoso.com Running Running 2 minutes ago
[root@docker01 ~]# docker service ps -f "desired-state=running" myredis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
vnjs46emceib myredis.1 redis:3.0.7 docker01.contoso.com Running Running 2 minutes ago
3930onk71uav myredis.2 redis:3.0.7 docker03.contoso.com Running Running 2 minutes ago
nol40hkh9g56 myredis.3 redis:3.0.7 docker02.contoso.com Running Running 2 minutes ago
q0oqrwrbju3n myredis.4 redis:3.0.7 docker03.contoso.com Running Running 2 minutes ago
jdqbfpf64xsu myredis.5 redis:3.0.7 docker02.contoso.com Running Running 2 minutes ago
urp2fndjogvb myredis.6 redis:3.0.7 docker01.contoso.com Running Running 2 minutes ago
Rolling back
Set the rollback policy when creating the service:
[root@docker01 ~]# docker service create --name myredis --replicas 6 --update-parallelism 2 --update-monitor 20s --update-max-failure-ratio .2 redis:3.0.6
fmfrmn6bjvf3rhe14yleioruq
Note: the earlier --rollback-parallelism and --rollback-monitor flags have been merged into the corresponding --update-* flags, so passing --rollback-* parameters here reports an unknown flag error.
Check the service status:
[root@docker01 ~]# docker service ps myredis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mj4r36nbqyfv myredis.1 redis:3.0.6 docker01.contoso.com Running Running 2 minutes ago
y6ddrzvqdq6r myredis.2 redis:3.0.6 docker03.contoso.com Running Running 2 minutes ago
3j831bwd885p myredis.3 redis:3.0.6 docker02.contoso.com Running Running 2 minutes ago
o1hia18mi13u myredis.4 redis:3.0.6 docker01.contoso.com Running Running 2 minutes ago
jatwoh12s9jr myredis.5 redis:3.0.6 docker03.contoso.com Running Running 2 minutes ago
ldo6bxes2itv myredis.6 redis:3.0.6 docker02.contoso.com Running Running 2 minutes ago
Run the update manually:
[root@docker01 ~]# docker service update --image redis:3.0.7 myredis
myredis
Check the update result:
[root@docker01 ~]# docker service ps -f "desired-state=running" myredis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
qb9ydmhvx2g0 myredis.1 redis:3.0.7 docker02.contoso.com Running Running 50 seconds ago
wun44p8i2i0j myredis.2 redis:3.0.7 docker03.contoso.com Running Running 53 seconds ago
q94nwkicn56j myredis.3 redis:3.0.7 docker01.contoso.com Running Running 52 seconds ago
m5mnwanemoi5 myredis.4 redis:3.0.7 docker03.contoso.com Running Running 53 seconds ago
b8jy2aeclv64 myredis.5 redis:3.0.7 docker02.contoso.com Running Running 51 seconds ago
9bb1k1nfaol4 myredis.6 redis:3.0.7 docker02.contoso.com Running Running 50 seconds ago
Roll back manually:
[root@docker01 ~]# docker service update --rollback --update-delay 0s myredis
myredis
Check the rollback result:
[root@docker01 ~]# docker service ps -f "desired-state=running" myredis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xiwpjue0fm1k myredis.1 redis:3.0.6 docker03.contoso.com Running Running 50 seconds ago
ggkntqx0l7bq myredis.2 redis:3.0.6 docker03.contoso.com Running Running 49 seconds ago
l7kv1hat1z8s myredis.3 redis:3.0.6 docker01.contoso.com Running Running 51 seconds ago
ql31hdzcp11h myredis.4 redis:3.0.6 docker01.contoso.com Running Running 52 seconds ago
ts7kyklkrc9h myredis.5 redis:3.0.6 docker02.contoso.com Running Running 53 seconds ago
feffo49ulnlk myredis.6 redis:3.0.6 docker02.contoso.com Running Running 53 seconds ago
Using overlay networks
Create an overlay network:
[root@docker01 ~]# docker network create --driver overlay myovl
mqptdqjx30cpra84wfu2akbxg
Create a new service attached to the overlay network:
[root@docker01 ~]# docker service create --replicas 3 --network myovl --name bbox01 busybox:latest ping www.baidu.com
s7tttmyqw1c5zvvkzc339as1w
Check the service status:
[root@docker01 ~]# docker service ps bbox01
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
r9k6izleq2g5 bbox01.1 busybox:latest docker01.contoso.com Running Running 10 seconds ago
yaevcojb17k1 bbox01.2 busybox:latest docker03.contoso.com Running Running 10 seconds ago
xeiju66hoei0 bbox01.3 busybox:latest docker02.contoso.com Running Running 10 seconds ago
On each node, check the IP addresses of the service's containers:
# Node 1 (manager)
[root@docker01 ~]# docker container ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c1244d20495 busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd "ping www.baidu.com" 2 minutes ago Up 2 minutes bbox01.1.r9k6izleq2g55hbdgnj67ddwg
[root@docker01 ~]# docker exec -it 4c1244d20495 /bin/sh
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:00:04
inet addr:10.0.0.4 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe00:4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:13 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1038 (1.0 KiB) TX bytes:648 (648.0 B)
# Node 2 (worker)
[root@docker02 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5a28a0d5aec1 busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd "ping www.baidu.com" About a minute ago Up About a minute bbox01.3.xeiju66hoei0ooc8i46pmekcv
[root@docker02 ~]# docker exec -it 5a28a0d5aec1 /bin/sh
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:00:03
inet addr:10.0.0.3 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe00:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1128 (1.1 KiB) TX bytes:648 (648.0 B)
# Node 3 (worker)
[root@docker03 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b3fe85292655 busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd "ping www.baidu.com" 2 minutes ago Up 2 minutes bbox01.2.yaevcojb17k1bzphmj20mr9rh
[root@docker03 ~]# docker exec -it b3fe85292655 /bin/sh
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:00:05
inet addr:10.0.0.5 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe00:5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1128 (1.1 KiB) TX bytes:648 (648.0 B)
Test that the network can carry traffic between containers:
[root@docker01 ~]# docker exec -it 4c1244d20495 /bin/sh
/ # ping 10.0.0.3 -c2 -w2
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=5.653 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.354 ms
--- 10.0.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.354/3.003/5.653 ms
/ # ping 10.0.0.5 -c2 -w2
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=21.347 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.320 ms
--- 10.0.0.5 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.320/10.833/21.347 ms
Persisting cluster data
1) Using data volumes (the default)
Mount a data volume when creating the service:
[root@docker01 ~]# docker service create --mount type=volume,src=test,dst=/data --name bbox02 busybox ping www.baidu.com
9qr8s9vha2ywa7becvqfukwxv
See which node the service is running on:
[root@docker01 ~]# docker service ps bbox02
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
azwf00koeug2 bbox02.1 busybox:latest docker03.contoso.com Running Running 26 seconds ago
On that node (docker03):
[root@docker03 ~]# docker volume ls
DRIVER VOLUME NAME
local test
Create a file in the docker volume:
[root@docker03 ~]# echo "hello" >> /var/lib/docker/volumes/test/_data/hello.txt
[root@docker03 ~]# cat /var/lib/docker/volumes/test/_data/hello.txt
hello
Check from inside the container:
[root@docker03 ~]# docker exec -it $(docker ps -a |grep bbox02|awk '{print $1}') /bin/sh
/ # ls /data
hello.txt
/ # cat /data/hello.txt
hello
2) Using NFS shared storage
Set up the NFS export on node 1:
[root@docker01 ~]# yum -y install nfs-utils
[root@docker01 ~]# vi /etc/exports
[root@docker01 ~]# cat /etc/exports
/data/test 192.168.49.0/24(rw)
[root@docker01 ~]# mkdir -p /data/test
[root@docker01 ~]# echo "Own by nfs." >> /data/test/readme.txt
[root@docker01 ~]# chmod -R 777 /data/test
[root@docker01 ~]# systemctl start rpcbind
[root@docker01 ~]# systemctl start nfs
Install the NFS client on the other two nodes:
[root@docker02 ~]# yum -y install nfs-utils
[root@docker02 ~]# systemctl start rpcbind
[root@docker02 ~]# systemctl start nfs
[root@docker03 ~]# yum -y install nfs-utils
[root@docker03 ~]# systemctl start rpcbind
[root@docker03 ~]# systemctl start nfs
Mount the NFS share when creating the service:
[root@docker01 ~]# docker service create --mount 'type=volume,src=nfs-dir,dst=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=192.168.49.41:/data/test,"volume-opt=o=addr=192.168.49.41,vers=4,soft,timeo=180,bg,tcp,rw"' --name bbox03 busybox ping www.baidu.com
cq3yg35fnpy5nrwuc2chmh366
See which node the service is running on:
[root@docker01 ~]# docker service ps bbox03
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
msbvkm3s3388 bbox03.1 busybox:latest docker02.contoso.com Running Running 2 minutes ago
On that node (docker02):
[root@docker02 ~]# docker volume ls
DRIVER VOLUME NAME
local nfs-dir
[root@docker02 ~]# docker volume inspect nfs-dir
[
{
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/nfs-dir/_data",
# where the volume data is stored on the node running the service
"Name": "nfs-dir",
"Options": {
"device": "192.168.49.41:/data/test",
"o":
# nfs共享存储位置,与其他节点共享 "addr=192.168.49.41,vers=4,soft,timeo=180,bg,tcp,rw",
"type": "nfs"
},
"Scope": "local"
}
]
Test whether the data is shared:
[root@docker02 ~]# docker exec -it $(docker ps -a |grep bbox03|awk '{print $1}') /bin/sh
/ # ls /data
readme.txt
/ # cat /data/readme.txt
Own by nfs.
Try creating a new file from inside the container:
[root@docker02 ~]# docker exec -it $(docker ps -a |grep bbox03|awk '{print $1}') /bin/sh
/ # echo "Added by container bbox03." >> /data/newfile
Check on the NFS host:
[root@docker01 ~]# cat /data/test/newfile
Added by container bbox03.
This confirms that files on the NFS share are synchronized across nodes.
8. Advanced Docker Swarm Features
Load balancing
Swarm mode has a built-in DNS component that automatically assigns a DNS record to every service in the cluster. The Swarm manager uses internal load balancing to distribute requests among the cluster's services based on their DNS names.
The Swarm manager uses ingress load balancing to expose the services you want reachable from outside the cluster. It automatically assigns each published service a port in the range 30000-32767, or you can specify the published port yourself.
The ingress network is a special overlay network that load-balances traffic among a service's nodes. When any swarm node receives a request on a published port, it hands the request to the IPVS module; IPVS tracks the IP addresses of all containers participating in the service, picks one, and routes the request to it over the ingress network.
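Both pieces can be observed directly (a quick sketch; the mynginx service referenced here is created just below):
```
# The ingress overlay network exists on every swarm node.
docker network inspect ingress

# A service's published ports, i.e. its ingress entry points.
docker service inspect --format '{{json .Endpoint.Ports}}' mynginx
```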
Docker Swarm offers two load-balancing modes:
- VIP: the service is assigned its own virtual IP, and the DNS record for the service name resolves to that VIP, which acts as the proxy address.
- dnsrr: the DNS record resolves not to a VIP but to the individual container IPs. dnsrr mode does not support publishing ports externally.
1) VIP mode
Create an nginx service:
[root@docker01 ~]# docker service create --replicas 3 --name mynginx --network myovl --publish 8080:80 mynginx:v1
ialfxo51tn9d64h3lbmu6k8i0
[root@docker01 ~]# docker service ps mynginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ig9a0v0c9k94 mynginx.1 mynginx:v1 docker03.contoso.com Running Running 37 seconds ago
pffqk518icd8 mynginx.2 mynginx:v1 docker02.contoso.com Running Running 28 seconds ago
oqv4q2775ju0 mynginx.3 mynginx:v1 docker01.contoso.com Running Running 34 seconds ago
Create a centos service as well:
[root@docker01 ~]# docker service create --name mycentos --network myovl 192.168.49.40:5000/centos_ssh:v1.0
uacfpcgt016q6jco0jhyn1ehq
[root@docker01 ~]# docker service ps mycentos
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wuxxyoccd8fv mycentos.1 192.168.49.40:5000/centos_ssh:v1.0 docker03.contoso.com Running Running 46 seconds ago
Look inside the container of the mycentos service:
[root@docker03 ~]# docker exec -it $(docker ps -a |grep mycentos.1|awk '{print $1}') /bin/bash
[root@<container_id> /]# nslookup mynginx
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: mynginx
Address: 10.0.0.2
[root@<container_id> /]# dig A +noall +answer mynginx
mynginx. 600 IN A 10.0.0.2
The DNS address that mynginx resolves to can also be viewed on the manager node:
[root@docker01 ~]# docker service inspect mynginx -f '{{.Endpoint.VirtualIPs}}'
[{w7nsefcod7xfjio63acqqb2bw 10.255.0.2/16} {mqptdqjx30cpra84wfu2akbxg 10.0.0.2/24}]
The mynginx service resolves to IP address 10.0.0.2. Now check the IP addresses of the containers belonging to the service on each swarm node:
# On the manager node
[root@docker01 ~]# docker exec -it $(docker ps -a |awk '$2~/mynginx:v1/{print $1}') /sbin/ifconfig|grep -A1 eth2
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.5 netmask 255.255.255.0 broadcast 0.0.0.0
# On node 2 (worker)
[root@docker02 ~]# docker exec -it $(docker ps -a |awk '$2~/mynginx:v1/{print $1}') /sbin/ifconfig|grep -A1 eth2
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.4 netmask 255.255.255.0 broadcast 0.0.0.0
# On node 3 (worker)
[root@docker03 ~]# docker exec -it $(docker ps -a |awk '$2~/mynginx:v1/{print $1}') /sbin/ifconfig|grep -A1 eth2
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.3 netmask 255.255.255.0 broadcast 0.0.0.0
So none of mynginx's containers actually has the IP address 10.0.0.2, yet that is what the service's DNS entry resolves to. Let's access the service from the mycentos container on the same network:
[root@docker03 ~]# docker exec -it $(docker ps -a |grep mycentos.1|awk '{print $1}') /bin/bash
[root@<container_id> /]# curl mynginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@<container_id> /]# curl 10.0.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Accessing by service name returns the same content as accessing a container IP directly, so load balancing is in effect.
Now modify index.html inside the containers, giving each container its own content:
[root@docker01 ~]# echo "manager01" >> index.html
[root@docker01 ~]# docker cp index.html $(docker ps -a |awk '$2~/mynginx:v1/{print $1}'):/usr/local/nginx/html
[root@docker02 ~]# echo "worker01" >> index.html
[root@docker02 ~]# docker cp index.html $(docker ps -a |awk '$2~/mynginx:v1/{print $1}'):/usr/local/nginx/html
[root@docker03 ~]# echo "worker02" >> index.html
[root@docker03 ~]# docker cp index.html $(docker ps -a |awk '$2~/mynginx:v1/{print $1}'):/usr/local/nginx/html
Since the service was created with port 8080 published, test directly from the hosts (the swarm nodes):
# On the manager node
[root@docker01 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
manager01
worker01
worker02
manager01
worker01
worker02
manager01
worker01
worker02
manager01
# On node 2
[root@docker02 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
worker02
manager01
worker01
worker02
manager01
worker01
worker02
manager01
worker01
worker02
# On node 3
[root@docker03 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
worker01
worker02
manager01
worker01
worker02
manager01
worker01
worker02
manager01
worker01
At this point, docker swarm's VIP-based load balancing is shown to be stable and working correctly.
2) dnsrr mode
Create a myweb service:
[root@docker01 ~]# docker service create --replicas 3 --name myweb --network myovl --endpoint-mode dnsrr mynginx:v1
fovdvwdvtbfscatve68e1ank6
See which nodes the myweb service runs on:
[root@docker01 ~]# docker service ps myweb
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
zqityfx1ma6m myweb.1 mynginx:v1 docker02.contoso.com Running Running 4 minutes ago
uuj9y2clos52 myweb.2 mynginx:v1 docker01.contoso.com Running Running 4 minutes ago
1ex0vt8poa8s myweb.3 mynginx:v1 docker03.contoso.com Running Running 4 minutes ago
As before, check myweb's DNS records from the mycentos container, which is on the same network:
[root@docker03 ~]# docker exec -it $(docker ps -a |grep mycentos.1|awk '{print $1}') /bin/bash
[root@<container_id> /]# nslookup myweb
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: myweb
Address: 10.0.0.9
Name: myweb
Address: 10.0.0.8
Name: myweb
Address: 10.0.0.10
Likewise, try pinging the myweb service:
[root@<container_id> /]# ping myweb -c2 -w2
PING myweb (10.0.0.8) 56(84) bytes of data.
64 bytes from myweb.1.zqityfx1ma6mzxi6cexyzmoej.myovl (10.0.0.8): icmp_seq=1 ttl=64 time=0.352 ms
64 bytes from myweb.1.zqityfx1ma6mzxi6cexyzmoej.myovl (10.0.0.8): icmp_seq=2 ttl=64 time=0.675 ms
--- myweb ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.352/0.513/0.675/0.163 ms
[root@<container_id> /]# ping myweb -c2 -w2
PING myweb (10.0.0.9) 56(84) bytes of data.
64 bytes from myweb.2.uuj9y2clos52lrubobdvr10fg.myovl (10.0.0.9): icmp_seq=1 ttl=64 time=0.398 ms
64 bytes from myweb.2.uuj9y2clos52lrubobdvr10fg.myovl (10.0.0.9): icmp_seq=2 ttl=64 time=0.274 ms
--- myweb ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
[root@<container_id> /]# ping myweb -c2 -w2
PING myweb (10.0.0.10) 56(84) bytes of data.
64 bytes from myweb.3.1ex0vt8poa8s424tp5z9ibpur.myovl (10.0.0.10): icmp_seq=1 ttl=64 time=0.059 ms
64 bytes from myweb.3.1ex0vt8poa8s424tp5z9ibpur.myovl (10.0.0.10): icmp_seq=2 ttl=64 time=0.081 ms
--- myweb ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.059/0.070/0.081/0.011 ms
DNS-based load balancing works too: access to the service is spread by DNS round-robin, each request landing on one of the container nodes.
Comparing the two load-balancing modes, each has its use cases, but VIP mode is better suited to production. When a service is created in the cluster and scaled across many containers, the containers sit behind one stable VIP, so we only need to publish the service on a fixed host port, run a separate load balancer (nginx, haproxy, and so on) on the host network, and proxy its traffic to that published port on the swarm nodes. This yields multi-layer load balancing in which the container count can grow and shrink at will, greatly improving flexibility.
With DNS-based balancing, round-robin spreads requests across the containers at random. Applications that cache the DNS-to-IP mapping may hit timeouts when the service's containers change (are destroyed, restarted, and so on), so any change in DNS resolution has to be checked for impact on other services; keep this in mind if you need dnsrr.
Node high availability
Check the current node status:
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
fkqck55wao2dr23t9jq42e6e7 docker03.contoso.com Ready Active
q1gkyqchf5nz20qgzy2dfeyk9 docker02.contoso.com Ready Active
rycnd1olbld9kizgb1h5rf8rs * docker01.contoso.com Ready Active Leader
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE
fovdvwdvtbfs myweb replicated 3/3 mynginx:v1
ialfxo51tn9d mynginx replicated 3/3 mynginx:v1
uacfpcgt016q mycentos replicated 1/1 192.168.49.40:5000/centos_ssh:v1.0
Shut down the manager node, then check whether the containers are still running:
# Node 2
[root@docker02 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c74b73a90f97 mynginx:v1 "./sbin/nginx -g ‘..." 48 minutes ago Up 48 minutes 80/tcp myweb.1.zqityfx1ma6mzxi6cexyzmoej
0f4ceba39262 mynginx:v1 "./sbin/nginx -g ‘..." 4 hours ago Up 4 hours 80/tcp mynginx.2.pffqk518icd8coz9ed5u0u1y5
# Node 3
[root@docker03 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e27dc98de27b mynginx:v1 "./sbin/nginx -g ‘..." 48 minutes ago Up 48 minutes 80/tcp myweb.3.1ex0vt8poa8s424tp5z9ibpur
29870e8a8752 192.168.49.40:5000/centos_ssh:v1.0 "/usr/sbin/sshd -D" 3 hours ago Up 3 hours 22/tcp mycentos.1.wuxxyoccd8fv5d932phnab9c4
60071b149e3f mynginx:v1 "./sbin/nginx -g ‘..." 4 hours ago Up 4 hours 80/tcp mynginx.1.ig9a0v0c9k94a9qvtp6kk7cgy
Then check whether the service is still available:
[root@docker02 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
worker01
worker02
worker01
worker02
worker01
worker02
worker01
worker02
worker01
worker02
[root@docker03 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
worker02
worker01
worker02
worker01
worker02
worker01
worker02
worker01
worker02
worker01
After restarting the manager node:
[root@docker01 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
worker02
manager01
worker01
worker02
manager01
worker01
worker02
manager01
worker01
worker02
Now promote node 2 and node 3 to manager nodes:
[root@docker01 ~]# docker node promote q1gkyqchf5nz20qgzy2dfeyk9
Node q1gkyqchf5nz20qgzy2dfeyk9 promoted to a manager in the swarm.
[root@docker01 ~]# docker node promote fkqck55wao2dr23t9jq42e6e7
Node fkqck55wao2dr23t9jq42e6e7 promoted to a manager in the swarm.
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
fkqck55wao2dr23t9jq42e6e7 docker03.contoso.com Ready Active Reachable
q1gkyqchf5nz20qgzy2dfeyk9 docker02.contoso.com Ready Active Reachable
rycnd1olbld9kizgb1h5rf8rs * docker01.contoso.com Ready Active Leader
Shut down the original manager node (docker01) again:
[root@docker02 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
fkqck55wao2dr23t9jq42e6e7 docker03.contoso.com Ready Active Reachable
q1gkyqchf5nz20qgzy2dfeyk9 * docker02.contoso.com Ready Active Leader
rycnd1olbld9kizgb1h5rf8rs docker01.contoso.com Ready Active Unreachable
Service status at this point:
[root@docker02 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE
fovdvwdvtbfs myweb replicated 3/3 mynginx:v1
ialfxo51tn9d mynginx replicated 3/3 mynginx:v1
uacfpcgt016q mycentos replicated 1/1 192.168.49.40:5000/centos_ssh:v1.0
Bring the manager node back online:
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
fkqck55wao2dr23t9jq42e6e7 docker03.contoso.com Ready Active Leader
q1gkyqchf5nz20qgzy2dfeyk9 docker02.contoso.com Ready Active Reachable
rycnd1olbld9kizgb1h5rf8rs * docker01.contoso.com Ready Active Reachable
Service status:
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE
fovdvwdvtbfs myweb replicated 3/3 mynginx:v1
ialfxo51tn9d mynginx replicated 3/3 mynginx:v1
uacfpcgt016q mycentos replicated 1/1 192.168.49.40:5000/centos_ssh:v1.0
Test that the service is still available:
[root@docker01 ~]# for i in `seq 10`;do curl http://127.0.0.1:8080;done
worker02
worker01
worker01
worker02
worker01
worker01
worker02
worker01
worker01
worker02
These experiments show that services stay highly available as long as the manager nodes are healthy, but more than half of the managers must remain up at all times: with 3 managers you can lose 1, with 5 you can lose 2. The total number of managers generally should not exceed 7; beyond that, coordination traffic among the managers becomes significant. Even counts are poor value (4 managers tolerate only 1 failure, the same as 3), so the manager count is normally odd.
Manager counts and the failures they tolerate:
| Manager nodes | Failures tolerated | Service status |
|---|---|---|
| 3 | 1 | normal |
| 5 | 2 | normal |
| 7 | 3 | normal |
| ... | ... | ... |
| n | (n-1)/2 | normal |
Disaster recovery:
If the swarm loses quorum, it cannot recover on its own. Tasks on the worker nodes keep running unaffected, but no management operations can be performed, including scaling or updating services and adding or removing nodes. The best recovery is to bring the lost leader node back online. If that is impossible, the only option is to run docker swarm init with --force-new-cluster on a manager, which strips manager status from every node except the one it runs on (see the sketch below).
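A sketch of that recovery step, run on a surviving manager (using this article's manager address; the other former managers must be re-promoted or re-joined afterwards):
```
# Rebuild a single-manager swarm from this node's current state;
# all other managers are demoted, and running services are preserved.
docker swarm init --force-new-cluster --advertise-addr 192.168.49.41
```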
[Diagram: Docker Swarm node high-availability structure]
Docker Stack orchestration
A stack is a group of interrelated services that share dependencies and can be orchestrated and scaled together. A single stack can define and coordinate the functionality of an entire application (though very complex applications may warrant multiple stacks). A stack is the set of services that make up a particular environment, and it is a convenient way to deploy several related services automatically without defining each one individually.
A stack file is a YAML file similar to docker-compose.yml. It defines one or more services along with their environment variables, deployment labels, container counts, and other environment-specific configuration.
Docker Stack's YAML closely resembles docker compose's, and most options are shared, except for the following:
- build
- cgroup_parent
- container_name
- devices
- dns
- dns_search
- tmpfs
- external_links
- links
- network_mode
- security_opt
- stop_signal
- sysctls
- userns_mode
Common Docker Stack commands:
- docker stack deploy: deploy a new stack or update an existing one
- docker stack ls: list existing stacks
- docker stack ps: list the tasks in a stack
- docker stack rm: remove one or more stacks
- docker stack services: list the services in a stack
Differences between Docker Stack and Docker Compose:
- Docker Stack ignores the build instruction: you cannot build new images with the stack command, so images must be built in advance. This makes docker-compose the better fit for development scenarios.
- Docker Compose is a Python project that drives containers through the Docker API specification, so docker-compose has to be installed separately to use it alongside Docker on your machine.
- Docker Stack ships inside the Docker Engine; no extra package is needed, since stacks are simply part of swarm mode.
- Docker Stack does not accept docker-compose.yml files written against version 2 of the format; the version must be at least 3. Docker Compose, however, still handles both version 2 and version 3 files.
- docker stack now does everything docker compose does, so docker stack is set to dominate. For most users, switching to docker stack is neither difficult nor costly; if you are new to Docker, or choosing technology for a new project, use docker stack.
Docker Stack orchestration example:
Below, Docker Stack orchestrates a wordpress service. wordpress depends mainly on an LNMP environment, and since an earlier article already built wordpress with Docker Compose, orchestrating it here with Docker Stack makes the differences between the two easier to compare, which should help in choosing an approach for future work.
First, the directory layout:
[root@docker01 services]# tree -L 2 .
.
├── dbdata
├── webcontent
│   ├── nginxconf
│   └── wordpress
└── wordpress.yml
4 directories, 1 file
This directory tree was adapted from the earlier docker compose article; the file contents are largely unchanged, except for the new wordpress.yml. Its contents:
[root@docker01 services]# cat wordpress.yml
version: '3'
services:
mysqldb:
image: mysql:5.6
ports:
- 3306:3306
networks:
- wps_net
volumes:
- ./dbdata:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 123456
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
php:
image: wordpress_php-cgi:latest
networks:
- wps_net
volumes:
- ./webcontent/wordpress:/usr/local/nginx/html
deploy:
mode: replicated
replicas: 12
nginx:
image: wordpress_nginx:latest
ports:
- 8080:80
networks:
- wps_net
volumes:
- ./webcontent/nginxconf:/usr/local/nginx/conf.d
- ./webcontent/wordpress:/usr/local/nginx/html
deploy:
mode: replicated
replicas: 3
depends_on:
- mysqldb
- php
networks:
wps_net:
driver: overlay
Next, the nginx configuration for wordpress:
[root@docker01 services]# cat webcontent/nginxconf/wordpress.conf
server {
listen 80;
server_name localhost;
server_tokens off;
access_log logs/access.log main;
error_log logs/error.log;
root /usr/local/nginx/html;
index index.php;
location ~ .*\.(php|php5)?$ {
fastcgi_pass php:9000;
include /usr/local/nginx/conf/fastcgi.conf;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PHP_VALUE "include_path=/usr/local/php:/usr/local/nginx/html:.";
}
}
Beyond that, webcontent/wordpress holds the extracted wordpress source. To save configuration steps later, edit webcontent/wordpress/wp-config.php and set the following options to the values below:
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'wordpress');
define('DB_HOST', 'mysqldb');
define('DB_CHARSET', 'utf8');
define('DB_COLLATE', '');
Deploy with Docker Stack:
[root@docker01 services]# docker stack deploy -c wordpress.yml wordpress
Creating network wordpress_wps_net
Creating service wordpress_mysqldb
Creating service wordpress_php
Creating service wordpress_nginx
List the stacks and their services:
[root@docker01 services]# docker stack ls
NAME SERVICES
wordpress 3
[root@docker01 services]# docker stack services wordpress
ID NAME MODE REPLICAS IMAGE
4t1tzw3p41sv wordpress_nginx replicated 3/3 wordpress_nginx:latest
ethi5exgee4b wordpress_php replicated 12/12 wordpress_php-cgi:latest
kuqf94pda9xk wordpress_mysqldb replicated 1/1 mysql:5.6
List the stack's running tasks:
[root@docker01 services]# docker stack ps wordpress -f 'desired-state=running'
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
3oqomt8pc7dp wordpress_mysqldb.1 mysql:5.6 docker01.contoso.com Running Running 2 hours ago
oea1ff7vvygo wordpress_nginx.1 wordpress_nginx:latest docker01.contoso.com Running Running 2 hours ago
66t9uy42fjdy wordpress_php.1 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
xesem7154mv9 wordpress_php.2 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
tvp4v2y3fg0c wordpress_nginx.2 wordpress_nginx:latest docker01.contoso.com Running Running 2 hours ago
e5b6i6y623lb wordpress_nginx.3 wordpress_nginx:latest docker01.contoso.com Running Running 2 hours ago
lek976adweuj wordpress_php.3 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
wivi95v5hh9r wordpress_php.4 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
ymhschv3sdzf wordpress_php.5 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
eqerngorx9lo wordpress_php.6 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
zkz42pgv2h4u wordpress_php.7 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
yszw5wa3xgn1 wordpress_php.8 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
xxrwlatxf6wj wordpress_php.9 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
ugt3kd21usaj wordpress_php.10 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
wo92batyp19a wordpress_php.11 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
ifxhg0kpjxpz wordpress_php.12 wordpress_php-cgi:latest docker01.contoso.com Running Running 2 hours ago
On another host in the same LAN as the Docker Swarm nodes, deploy nginx and configure a reverse proxy:
[root@<proxy_host> ~]# ifconfig |grep -A1 eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:F3:43:86
inet addr:192.168.49.101 Bcast:192.168.49.255 Mask:255.255.255.0
[root@<proxy_host> ~]# cat /usr/local/nginx/conf.d/wordpress.conf
upstream wordpress {
server 192.168.49.41:8080 weight=10;
server 192.168.49.42:8080 weight=10;
server 192.168.49.43:8080 weight=10;
}
server {
listen 80 default;
server_name localhost;
location / {
proxy_pass http://wordpress;
proxy_set_header Host $host;
proxy_set_header X-Real-Ip $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
With everything in place, browse to the reverse proxy's IP address. The wordpress installation page appears, and a few simple settings complete the installation (not demonstrated here). Deploying wordpress with Docker Stack is done.
Original article: https://blog.51cto.com/jerry12356/2388683