Docker-04: Docker Single-Host Networking in Detail

1. Overview of Docker network modes

1.1 Single-host modes

  • bridge

The default mode. Docker assigns each container its own IP address, attaches it to the docker0 virtual bridge, and uses the docker0 bridge together with iptables NAT rules to communicate with the host and the outside world.

  • host

The container does not get its own virtual network interface or IP configuration; it uses the host's IP addresses and ports directly.

  • none

This mode disables the container's networking entirely (only the loopback interface remains).
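
Which mode a container uses is selected with the --network option of docker run. A minimal sketch using throwaway busybox containers (the demo-* names are illustrative, not from the original article):

# Bridge is the default: the container gets an IP on the docker0 bridge
docker run -d --name demo-bridge busybox sleep 3600
# Host mode: the container shares the host's network stack and gets no separate IP
docker run -d --name demo-host --network host busybox sleep 3600
# None mode: only a loopback interface, no external connectivity
docker run -d --name demo-none --network none busybox sleep 3600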

1.2 Multi-host mode

  • overlay

An overlay network spans multiple Docker hosts so that containers on different hosts can communicate. This article focuses on the single-host modes.

2. Linux network namespaces

2.1 Observing the isolation between namespaces

  • Step 1: start a busybox container
docker run -d --name test1 busybox /bin/sh -c "while true;do sleep 3600;done"
  • Step 2: enter the newly created container
[root@docker01 ~]# docker exec -it test1 /bin/sh
/ # 
  • Step 3: run `ip a` to compare the network namespaces of the container and the host; the container's network namespace is clearly isolated from the host's
# View the container's network namespace
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
# View the host's network namespace
[root@docker01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.38/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90c6:d2ff:fe2d:4098/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fea5:3e94/64 scope link
       valid_lft forever preferred_lft forever
15: veth9b0a4fb@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 1a:3d:79:de:cd:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::183d:79ff:fede:cd28/64 scope link
       valid_lft forever preferred_lft forever
  • Step 4: start a second container and compare the network namespaces of the two containers. Pinging one container from the other (a check is sketched after the output below) shows that the two network namespaces can reach each other.
[root@docker01 ~]# docker run -d --name test2 busybox /bin/sh -c "while true;do sleep 3600;done"
e8a75ef22834a402a3a0f71460f8551a658a7bacd364937de9358d35abe960cb
[root@docker01 ~]# sudo docker exec test1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@docker01 ~]# sudo docker exec test2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
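
A direct ping confirms the connectivity; a minimal check using the addresses from the output above (expected to succeed on a default bridge setup):

# From test1, ping test2's address on the default bridge
docker exec test1 ping -c 2 172.17.0.3
# And in the reverse direction
docker exec test2 ping -c 2 172.17.0.2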

2.2 Experiment: connecting two Linux network namespaces

Experiment: create a veth pair between two namespaces and make the namespaces communicate.

Before the experiment, think about what two physical computers need in order to communicate:

  1. Each computer needs a network interface
  2. The two interfaces need to be connected by a cable
  3. The two computers need addresses in the same subnet

Once these three conditions are met, the two hosts can reach each other. Linux network namespaces achieve the same thing with a veth pair, and the process is essentially identical. Let's walk through the experiment.

2.2.1 Basic network namespace usage

Before the experiment, a quick look at the basic commands:

  • List the network namespaces on the Linux host
[root@docker01 ~]# ip netns list
  • Create network namespaces
[root@docker01 ~]# ip netns add test1
[root@docker01 ~]# ip netns add test2
[root@docker01 ~]# ip netns list
test2
test1
  • Check the IP addresses inside a network namespace
# A newly created network namespace has only a loopback interface, with no IP address assigned and in the DOWN state
[root@docker01 ~]# ip netns exec test1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  • Bring up an interface inside the namespace, for example lo
[root@docker01 ~]# ip netns exec test1 ip link set dev lo up
[root@docker01 ~]# ip netns exec test1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2.2.2 Experiment steps

  • Step 1: on the host, add a veth pair, veth-test1 and veth-test2; both links are DOWN after creation. (Think of this as making two network cards and connecting them with a cable.)
[root@docker01 ~]# ip link add veth-test1 type veth peer name veth-test2
[root@docker01 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
15: veth9b0a4fb@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 1a:3d:79:de:cd:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
17: veth8abdbae@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether ae:b3:fa:8f:90:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
18: veth-test2@veth-test1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 36:ad:7c:f7:32:e7 brd ff:ff:ff:ff:ff:ff
19: veth-test1@veth-test2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether fa:27:f5:29:95:0b brd ff:ff:ff:ff:ff:ff
  • Step 2: move veth-test1 into network namespace test1. (Think of plugging one end's network card into the first computer.)

Before this step, test1 contains only a lo interface, which is DOWN.

# Move veth-test1 into namespace test1
[root@docker01 ~]# ip link set veth-test1 netns test1

# Check the interfaces and addresses in test1
[root@docker01 ~]# ip netns exec test1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
19: veth-test1@if18: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether fa:27:f5:29:95:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0

# Interface 19 is no longer on the host (it has been moved into test1)
[root@docker01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.38/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90c6:d2ff:fe2d:4098/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fea5:3e94/64 scope link
       valid_lft forever preferred_lft forever
15: veth9b0a4fb@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 1a:3d:79:de:cd:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::183d:79ff:fede:cd28/64 scope link
       valid_lft forever preferred_lft forever
17: veth8abdbae@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether ae:b3:fa:8f:90:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::acb3:faff:fe8f:90b5/64 scope link
       valid_lft forever preferred_lft forever
18: veth-test2@if19: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 36:ad:7c:f7:32:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 2
  • Step 3: move veth-test2 into network namespace test2. (Think of plugging the other end's network card into the second computer.)

Before this step, test2 contains only a lo interface, which is DOWN.

# Move veth-test2 into namespace test2
[root@docker01 ~]# ip link set veth-test2 netns test2

# Check the interfaces in test2 afterwards
[root@docker01 ~]# ip netns exec test2 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
18: veth-test2@if19: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 36:ad:7c:f7:32:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 0

# Check the host again: interface 18 is gone as well
[root@docker01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.38/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90c6:d2ff:fe2d:4098/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fea5:3e94/64 scope link
       valid_lft forever preferred_lft forever
15: veth9b0a4fb@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 1a:3d:79:de:cd:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::183d:79ff:fede:cd28/64 scope link
       valid_lft forever preferred_lft forever
17: veth8abdbae@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether ae:b3:fa:8f:90:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::acb3:faff:fe8f:90b5/64 scope link
       valid_lft forever preferred_lft forever
  • Step 4: assign an IP address to the veth interface in each namespace
[root@docker01 ~]# ip netns exec test1 ip addr add 10.0.0.1/24 dev veth-test1
[root@docker01 ~]# ip netns exec test2 ip addr add 10.0.0.2/24 dev veth-test2
  • Step 5: bring up the veth interface in each namespace
[root@docker01 ~]# ip netns exec test1 ip link set dev veth-test1 up
[root@docker01 ~]# ip netns exec test2 ip link set dev veth-test2 up
  • Step 6: check the IP addresses in both namespaces
[root@docker01 ~]# ip netns exec test1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
19: veth-test1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fa:27:f5:29:95:0b brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.1/24 scope global veth-test1
       valid_lft forever preferred_lft forever
    inet6 fe80::f827:f5ff:fe29:950b/64 scope link
       valid_lft forever preferred_lft forever
[root@docker01 ~]# ip netns exec test2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
18: veth-test2@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 36:ad:7c:f7:32:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 scope global veth-test2
       valid_lft forever preferred_lft forever
    inet6 fe80::34ad:7cff:fef7:32e7/64 scope link
       valid_lft forever preferred_lft forever
  • Step 7: ping between the two namespaces; if the pings succeed, the experiment has worked.
[root@docker01 ~]# ip netns exec test1 ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.109 ms
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.109/0.115/0.121/0.006 ms
[root@docker01 ~]# ip netns exec test2 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.106 ms
^C
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.079/0.092/0.106/0.016 ms

That is essentially how Linux network namespaces work, and Docker container networking is built on the same mechanism.

3. The Docker bridge network

3.1 The docker network command

  • List the networks on the Docker host
[root@docker01 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
afba28b63ba9        bridge              bridge              local
d12ebb4b73d8        host                host                local
c2fb11041077        none                null                local
  • Inspect a network's details

The output contains a Containers section that lists test1, which shows that test1's network is attached to this bridge.

[root@docker01 ~]# docker network inspect afba28b63ba9
[
    {
        "Name": "bridge",
        "Id": "afba28b63ba9369a0068985cc75f93c309c3e802bee19ad616d2961f45df310a",
        "Created": "2019-03-30T10:45:07.636737489+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8e8f15985b86f0d0811332bd5d963342e8d1f3e9a060de55237db9a3e084b0a6": {
                "Name": "test1",
                "EndpointID": "47e99078d905b7c11fe0b4cd6279470746f4c0d85fcec8a6e40d536987163ddb",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

3.2 How container network communication works

3.2.1 How containers reach each other

Every container is attached to the docker0 bridge, and traffic between containers flows through that bridge. (The routing table inside a container, sketched below, reflects this.)
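
One way to see this from inside a container is to look at its routing table; a minimal check, assuming the test1 container from section 2.1 is still running and a default bridge setup:

# The default route points at docker0 (172.17.0.1), and 172.17.0.0/16 is
# directly connected via eth0, so other containers are reachable across the bridge
docker exec test1 ip route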

3.2.2 Verifying the setup

First, run ip a in the host's network namespace. Besides the lo and eth0 interfaces, there are two additional devices:

  • docker0: the Linux bridge device that backs the host's default bridge network
  • veth9b0a4fb@if14: one end of a veth pair attached to docker0; the other end sits inside the container
[root@docker01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.38/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90c6:d2ff:fe2d:4098/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fea5:3e94/64 scope link
       valid_lft forever preferred_lft forever
15: veth9b0a4fb@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 1a:3d:79:de:cd:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::183d:79ff:fede:cd28/64 scope link
       valid_lft forever preferred_lft forever
  • Step 2: look at the network interfaces of the test1 container

It has an interface eth0@if15, which forms a pair with the host's veth9b0a4fb@if14 (the interface indexes reference each other). A way to double-check the pairing is sketched after the output below.

[root@docker01 ~]# docker exec test1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
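
To confirm that these two interfaces really form a pair, compare the interface indexes; a minimal check (the values 14 and 15 are specific to this host and will differ elsewhere):

# Inside the container, iflink of eth0 reports the peer's index on the host
docker exec test1 cat /sys/class/net/eth0/iflink
# On the host, the ifindex of veth9b0a4fb should be that same number (15 here)
cat /sys/class/net/veth9b0a4fb/ifindex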
  • Step 3: verify that veth9b0a4fb is attached to docker0:
# Install the bridge-utils package
[root@docker01 ~]# yum -y install bridge-utils

# Use brctl show to list the interfaces attached to each bridge
[root@docker01 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
docker0        8000.0242cca53e94    no        veth9b0a4fb

You can also start another container and repeat the check to confirm the result.

3.2.3 How containers reach the outside world

The container connects to docker0 through its veth pair, and docker0 then provides access to the outside world via NAT.

The NAT is implemented by the host firewall (iptables).
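
The masquerading rule Docker installs can be seen in the nat table; a minimal check on a default installation (exact rule text may vary by Docker version, and the ping assumes the host itself has outbound connectivity):

# Docker adds a MASQUERADE rule for the bridge subnet 172.17.0.0/16
iptables -t nat -S POSTROUTING
# With that rule in place, a container can reach external addresses
docker exec test1 ping -c 2 8.8.8.8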

3.3 Managing bridge networks

  • Create a bridge network with docker network create
# Create a bridge network with the docker network command
[root@docker01 ~]# docker network create -d bridge my-bridge
# List the networks again
[root@docker01 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
afba28b63ba9        bridge              bridge              local
d12ebb4b73d8        host                host                local
b18aced3f3e8        my-bridge           bridge              local
c2fb11041077        none                null                local

# Check which interfaces are attached to which bridge
[root@docker01 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
br-b18aced3f3e8        8000.024297dc79ce    no
docker0        8000.0242cca53e94    no        veth9b0a4fb
  • Specify the subnet, gateway, and other details when creating a network
# Create a network with an explicit subnet and gateway
docker network create --driver bridge --subnet 172.22.16.0/24 --gateway 172.22.16.1 my-test
  • Delete an existing network
# Remove a user-defined network; disconnect or remove all containers on it first
docker network rm my-test
  • Attach a container to the new network at creation time with the --network option
[root@docker01 ~]# docker run -d --name test3 --network my-bridge busybox /bin/sh -c "while true;do sleep 3600;done"
d0d8d5b85eedde27d771976693fccf099010a64d9e28d630059bde3764549f82
  • Connect an existing container to the new network with docker network connect
[root@docker01 ~]# docker network connect my-bridge test1
[root@docker01 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
br-b18aced3f3e8        8000.024297dc79ce    no        veth11267b8
                            veth2cbc898
docker0        8000.0242cca53e94    no        veth9b0a4fb
  • Disconnect a container from a network
[root@docker01 ~]# docker network disconnect my-bridge test1

Notes: 1. Containers attached to the default docker0 bridge cannot ping each other by container name.

2. Containers on a user-defined network can ping each other by name, because Docker performs the equivalent of --link automatically (built-in name resolution) when containers join such a network. A quick check is sketched below.
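
A quick way to observe the difference (run this before the disconnect step above, so that test1 and test3 are both on my-bridge while test2 sits only on the default bridge):

# Name resolution works between containers on the user-defined network
docker exec test3 ping -c 2 test1
# On the default docker0 bridge, pinging another container by name fails
docker exec test1 ping -c 2 test2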

4. The none and host networks

4.1 The Docker none network

A container attached to the none network has a completely isolated network namespace; it can only be reached with docker exec. This mode is rarely used.

Verification:

  • Step 1: create a container attached to the none network
[root@docker01 ~]# docker run -d --name test4 --network none busybox /bin/sh -c "while true;do sleep 3600;done"
cc801ac03ff8ccf81ed6c168dda6a39a217066c37356792c0f5ddd1334852b40
  • Step 2: inspect the none network; the new container has no IP address, MAC address, or other network configuration
[root@docker01 ~]# docker network inspect none
[
    {
        "Name": "none",
        "Id": "c2fb11041077b5d87b48aa97b90983beadb78b7931e121ff4af592cc4bf737f9",
        "Created": "2019-03-30T10:45:07.569241552+08:00",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "cc801ac03ff8ccf81ed6c168dda6a39a217066c37356792c0f5ddd1334852b40": {
                "Name": "test4",
                "EndpointID": "0c4ba07e746316b4b633b139b3ec1cf42a92297776214d25b2609d331e1356ad",
                "MacAddress": "",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
  • Step 3: enter the container; apart from the loopback interface it has no network configuration either
[root@docker01 ~]# docker exec -it test4 /bin/sh
/ # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # 

4.2 The Docker host network

In host mode, the container does not get its own network namespace; it shares the host's network namespace. This mode is also rarely used.

Verification:

  • Step 1: create a container with the host network mode
[root@docker01 ~]# docker run -d --name test5 --network host busybox /bin/sh -c "while true;do sleep 3600;done"
62bba657c8ac97a27314c3a492fcf2e1d4faff6c3adec331ebf37ebbf25509c1
  • Step 2: check the host's network configuration
[root@docker01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.38/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90c6:d2ff:fe2d:4098/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fea5:3e94/64 scope link
       valid_lft forever preferred_lft forever
  • Step 3: enter the container and check its network configuration (it is identical to the host's)
[root@docker01 ~]# docker exec -it test5 /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether 92:c6:d2:2d:40:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.38/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90c6:d2ff:fe2d:4098/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
    link/ether 02:42:cc:a5:3e:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fea5:3e94/64 scope link
       valid_lft forever preferred_lft forever


That concludes the introduction to single-host container networking.

Original article: https://www.cnblogs.com/liuguangjiji/p/10627382.html
