Docker (CentOS 7) Usage Notes 5 - Weave Network

Weave official site: https://www.weave.works

1. Download and install

sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
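
As a quick sanity check, weave can report its version; weave setup pre-pulls the container images it needs:

weave setup
weave version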

2. Deploy the weave network

(1) Run weave on the first machine. If weave's default allocation range (10.32.0.0/12) is acceptable, launching is simply:

weave launch

This test uses a custom address range, so the launch command differs:

weave launch --ipalloc-range 168.108.0.0/16

After a successful launch, three weave containers are present. Only weave itself keeps running; weavevolumes and weavedb are data-only containers that stay in the Created state:

# docker ps -a
c9ed14e97dfd        weaveworks/weave:2.0.4                 "/home/weave/weave..."   2 days ago          Up 2 days                                   weave
7db070b5f54e        weaveworks/weaveexec:2.0.4             "/bin/false"             2 days ago          Created                                     weavevolumes-2.0.4
b6d603c8c7a8        weaveworks/weavedb                     "data-only"              2 days ago          Created                                     weavedb

ifconfig now shows the virtual interfaces added by weave (datapath and weave):

# ifconfig
datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        ether a6:66:9d:b6:5f:66  txqueuelen 1000  (Ethernet)
        RX packets 3  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.240.0  broadcast 0.0.0.0
        ether 02:42:97:9e:30:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker_gwbridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.16.1  netmask 255.255.240.0  broadcast 0.0.0.0
        ether 02:42:b9:64:2f:b8  txqueuelen 0  (Ethernet)
        RX packets 366610  bytes 29530131 (28.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 366610  bytes 29530131 (28.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.28.148.61  netmask 255.255.252.0  broadcast 10.28.151.255
        ether 00:16:3e:0e:80:7a  txqueuelen 1000  (Ethernet)
        RX packets 127115170  bytes 12384433822 (11.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 78033899  bytes 8572122284 (7.9 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 101.37.162.152  netmask 255.255.252.0  broadcast 101.37.163.255
        ether 00:16:3e:0e:86:ce  txqueuelen 1000  (Ethernet)
        RX packets 3995610  bytes 538305947 (513.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4881735  bytes 4715682947 (4.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 366610  bytes 29530131 (28.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 366610  bytes 29530131 (28.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth7720327: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 7e:69:f5:02:6d:9e  txqueuelen 0  (Ethernet)
        RX packets 6  bytes 372 (372.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 798 (798.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        ether c2:a6:97:90:01:0a  txqueuelen 1000  (Ethernet)
        RX packets 3  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
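
Note the 1376-byte MTU on datapath and weave: the overlay reserves headroom for its encapsulation. For a consolidated view of the router, the allocation range, and peer connections, weave has a status subcommand (output omitted here):

weave status
weave status ipam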

(2) Each additional node joins the existing weave network by passing the first node's IP address to launch:

weave launch 10.28.148.61 --ipalloc-range 168.108.0.0/16
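
Once a peer has joined, the mesh can be verified from any node with the same status subcommands:

weave status peers
weave status connections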

(3) If the network was created successfully, the weave network is visible via docker network ls on every node:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7c19813ffbff        bridge              bridge              local
a7a2188380ba        docker_gwbridge     bridge              local
7f97ac1cfe6e        host                host                local
z08xcdlswkbk        ingress             overlay             swarm
dfa68b3918b3        none                null                local
42f695c8c061        weave               weavemesh           local
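
To double-check that the custom 168.108.0.0/16 range took effect, inspect the network with a standard Docker command:

docker network inspect weave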

3. Docker launch test

(1) Launching is simple: just add --network weave to a normal docker run command (mytest is the test image used here):

docker run -ti --network weave mytest
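
As an alternative to --network weave, weave also ships a Docker API proxy that attaches containers to the network automatically; a sketch of that approach, using the same mytest image:

eval $(weave env)
docker run -ti mytest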

(2) Start a container on each of the two nodes

Running ifconfig inside each container shows an ethwe0 interface with an address from the weave range. Weave's IPAM divides the range between peers, which is why the two containers received 168.108.0.1 and 168.108.192.0 respectively.

On node 1's container:

# ifconfig
ethwe0    Link encap:Ethernet  HWaddr 42:6E:BF:E4:72:A7
          inet addr:168.108.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:42 (42.0 b)  TX bytes:42 (42.0 b)

eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:10:06
          inet addr:192.168.16.6  Bcast:0.0.0.0  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

On node 2's container:

# ifconfig
ethwe0    Link encap:Ethernet  HWaddr F6:8D:A2:CB:EF:F5
          inet addr:168.108.192.0  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:42 (42.0 b)

eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:10:03
          inet addr:192.168.16.3  Bcast:0.0.0.0  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

The containers can ping each other across the two hosts:

From the container on node 1:

# ping 168.108.192.0
PING 168.108.192.0 (168.108.192.0) 56(84) bytes of data.
64 bytes from 168.108.192.0: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 168.108.192.0: icmp_seq=2 ttl=64 time=0.334 ms
64 bytes from 168.108.192.0: icmp_seq=3 ttl=64 time=0.257 ms
64 bytes from 168.108.192.0: icmp_seq=4 ttl=64 time=0.386 ms
^C
--- 168.108.192.0 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3845ms
rtt min/avg/max/mdev = 0.257/0.478/0.935/0.267 ms

From the container on node 2:

# ping 168.108.0.1
PING 168.108.0.1 (168.108.0.1) 56(84) bytes of data.
64 bytes from 168.108.0.1: icmp_seq=1 ttl=64 time=0.428 ms
64 bytes from 168.108.0.1: icmp_seq=2 ttl=64 time=0.274 ms
64 bytes from 168.108.0.1: icmp_seq=3 ttl=64 time=0.344 ms
64 bytes from 168.108.0.1: icmp_seq=4 ttl=64 time=0.341 ms
^C
--- 168.108.0.1 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 8592ms
rtt min/avg/max/mdev = 0.235/0.301/0.428/0.056 ms

(3) Throughput test:

The test environment is ECS instances on Alibaba Cloud with 1 Gbit/s of internal network bandwidth.

First, install iperf3 (a network throughput testing tool):

curl "http://downloads.es.net/pub/iperf/iperf-3.0.6.tar.gz" -o iperf-3.0.6.tar.gz
tar xzvf iperf-3.0.6.tar.gz
cd iperf-3.0.6
./configure
make
make install
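
Building from source is shown above for completeness; on CentOS 7, iperf3 can also be installed as a package, assuming the EPEL repository is available:

sudo yum install -y epel-release
sudo yum install -y iperf3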

Start the iperf3 server on node 2:

# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Run the throughput test from node 1, targeting node 2's weave address:

# iperf3 -c 168.108.192.0
Connecting to host 168.108.192.0, port 5201
[  4] local 168.108.0.1 port 50208 connected to 168.108.192.0 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   170 MBytes  1.42 Gbits/sec  1443    344 KBytes
[  4]   1.00-2.00   sec  95.2 MBytes   799 Mbits/sec  3432    835 KBytes
[  4]   2.00-3.00   sec  95.0 MBytes   797 Mbits/sec  3934    397 KBytes
[  4]   3.00-4.00   sec  96.2 MBytes   807 Mbits/sec  3306    684 KBytes
[  4]   4.00-5.00   sec  93.8 MBytes   786 Mbits/sec  4532    818 KBytes
[  4]   5.00-6.00   sec  95.0 MBytes   797 Mbits/sec  4308    617 KBytes
[  4]   6.00-7.00   sec  95.0 MBytes   797 Mbits/sec  4610    326 KBytes
[  4]   7.00-8.00   sec  95.0 MBytes   797 Mbits/sec  2607    887 KBytes
[  4]   8.00-9.00   sec  93.8 MBytes   786 Mbits/sec  4161    905 KBytes
[  4]   9.00-10.00  sec  95.0 MBytes   797 Mbits/sec  4314    666 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1024 MBytes   859 Mbits/sec  36647             sender
[  4]   0.00-10.00  sec  1021 MBytes   856 Mbits/sec                  receiver

iperf Done.

The averages work out to 859 Mbits/sec sending and 856 Mbits/sec receiving, close to the 1 Gbit/s line rate. That is quite satisfactory for an overlay network.
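
For reference, repeating the test against a host's own address (for example node 1's 10.28.148.61) gives a host-to-host baseline from which the overlay's overhead can be estimated:

iperf3 -c 10.28.148.61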
