I. Bonding Modes
1. balance-rr: round-robin mode
2. active-backup: active-backup (hot-standby) mode
3. broadcast: broadcast mode
The mode is selected once, when the bond is created, as sketched below.
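A minimal sketch of how each mode is chosen at creation time, reusing the bond0/eno1/eno2 names from the rest of this document (pick exactly one; a bond has a single mode):
nmcli con add type bond con-name bond0 ifname bond0 mode balance-rr
nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
nmcli con add type bond con-name bond0 ifname bond0 mode broadcast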
II. Configuration Steps (nmcli commands)
1. Create the master interface
nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
2. Assign an IP address to the master interface
nmcli con mod bond0 ipv4.addresses '192.168.0.100/24'
nmcli con mod bond0 ipv4.method manual
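An address alone will not route traffic off the local subnet. Assuming 192.168.0.254 is the gateway (it is the address pinged later in this document; substitute your own), you would typically also set:
nmcli con mod bond0 ipv4.gateway 192.168.0.254
nmcli con mod bond0 ipv4.dns 192.168.0.254    (assumption: the gateway also serves DNS)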
3. Create the slave interfaces
nmcli con add type bond-slave ifname eno1 master bond0
nmcli con add type bond-slave ifname eno2 master bond0
4. Bring up the master and slave interfaces (the connection names below were generated automatically by nmcli from the type and interface name in step 3)
nmcli con up bond-slave-eno2
nmcli con up bond-slave-eno1
nmcli con up bond0
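Before moving on, it is worth confirming that the bond actually came up; these are standard checks, not specific to this setup:
nmcli con show --active
ip addr show bond0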
Configuration Steps (config files)
vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes                                   # activate at boot
USERCTL=no                                   # non-root users may not control the interface
BONDING_OPTS="mode=balance-rr miimon=50"     # round-robin mode, link checked every 50 ms
BOOTPROTO=none                               # static addressing
IPADDR0=10.1.1.250
PREFIX0=24
vi /etc/sysconfig/network-scripts/ifcfg-<name>    (one file per slave, e.g. ifcfg-eno1)
DEVICE=<name>
TYPE=Ethernet
ONBOOT=yes
MASTER=bond0      # the bond this slave belongs to
SLAVE=yes
USERCTL=no
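The ifcfg files are only read when the connections are (re)loaded. A sketch of activating them, assuming NetworkManager derives the connection name from the device (add NAME=bond0 to the master file to make this explicit):
nmcli con reload
nmcli con up bond0
On systems still running the legacy network service, systemctl restart network achieves the same.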
III. Verifying the Result
The output below reflects the active-backup bond built in section II:
cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 50
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:10:18:2b:98:85
Slave queue ID: 0
Slave Interface: eno2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 64:31:50:18:80:8f
Slave queue ID: 0
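To watch active-backup failover in action, take the active slave down and re-read the status file. A sketch, assuming eno1 is the active slave as in the output above:
ip link set eno1 down
grep "Currently Active Slave" /proc/net/bonding/bond0
The grep should now report eno2; bring the link back afterwards with ip link set eno1 up.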
IV. Teaming Configuration Process
#ip link    (list the available NIC devices)
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 52:54:00:00:XX:0b brd ff:ff:ff:ff:ff:ff
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:10:18:2b:98:85 brd ff:ff:ff:ff:ff:ff
6: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 64:31:50:18:80:8f brd ff:ff:ff:ff:ff:ff
#nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'    (1)
#nmcli con mod team0 ipv4.addresses '192.168.0.100/24'    (2)
#nmcli con mod team0 ipv4.method manual    (3)
#nmcli con add type team-slave con-name team0-port1 ifname eno1 master team0    (4) (con-name may be omitted; nmcli then generates a default)
#nmcli con add type team-slave con-name team0-port2 ifname eno2 master team0    (5) (con-name may be omitted; nmcli then generates a default)
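activebackup is only one of the teamd runners; the JSON passed via config is what selects it. Other standard runner names include roundrobin, broadcast, loadbalance, and lacp, so a load-balancing team would be created with, e.g.:
#nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'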
#teamdctl team0 state    (check the team status)
setup:
runner: activebackup
ports:
eno1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
eno2
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eno1
#ping -I team0 192.168.0.254    (test; -I forces the pings out through team0)
PING 192.168.0.254 (192.168.0.254) from 192.168.0.100 team0: 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_seq=10 ttl=64 time=1.08 ms
64 bytes from 192.168.0.254: icmp_seq=11 ttl=64 time=0.789 ms
64 bytes from 192.168.0.254: icmp_seq=12 ttl=64 time=0.906 ms
...Output omitted...
#nmcli dev dis eno1    (disconnect one of the slave devices)
#teamdctl team0 state
setup:
runner: activebackup
ports:
eno2
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eno2
#nmcli con up team0-port1    (re-enable the first port)
#nmcli dev dis eno2    (now disconnect the other slave)
#teamdctl team0 state
setup:
runner: activebackup
ports:
eno1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eno1
#nmcli con up team0-port2    (re-enable the second port)
# teamdctl team0 state
setup:
runner: activebackup
ports:
eno1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
eno2
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eno1
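As a final check, the full teamd configuration that nmcli generated can be dumped with a standard teamdctl subcommand:
#teamdctl team0 config dump
Leaving ping -I team0 192.168.0.254 running in a second terminal while repeating the disconnect/reconnect steps above should show little or no packet loss, which is the point of the exercise.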