Multicast on OpenStack

I tested multicast on OpenStack, using an external router for this test.

OpenStack Environment:

Havana (ML2 + OVS)

Test Environment:

VM1 is in VLAN 850; VM2, VM3, and VM4 are in VLAN 820.

VM2 and VM3 are on the same compute node (ci91szcmp003), while VM1 and VM4 are on another compute node (ci91szcmp004).

Here is the topology:

The security group I am using is as follows; UDP port 5001 is the port I use for the multicast test traffic:
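If you need to reproduce it, a roughly equivalent rule can be created with the Havana-era neutron CLI; a minimal sketch (the security group name "multicast-test" is just an example, not the group used in this test):

# Hypothetical example: allow the UDP 5001 iperf test traffic into the VMs.
# "multicast-test" is an example security group name.
neutron security-group-rule-create --direction ingress \
    --protocol udp --port-range-min 5001 --port-range-max 5001 multicast-test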

Test Tool:

I use iperf to test multicast. You can easily install this tool on CentOS with the following command:

yum install iperf

If you are not familiar with this tool, see "man iperf" for more details.

Simulate a multicast sender with the following command:

iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1

Simulate a multicast receiver with the following command:

iperf -s -u -B 224.1.1.1 -i 1

Here -u selects UDP, -T 32 sets the multicast TTL on the sender, -t 3 sends traffic for 3 seconds, -i 1 prints a report every second, and -B 224.1.1.1 makes the receiver bind to and join the multicast group 224.1.1.1.

Test Cases and Results:

VMs in the same VLAN

Case #1:

VM2 sends multicast packets. VM3 and VM4 do not join the multicast group; use tcpdump to see whether they receive the multicast packets.

The multicast packet flow from VM2 to VM3 is VM2 -> OVS -> VM3.

The multicast packet flow from VM2 to VM4 is VM2 -> OVS -> Phy Switch -> OVS -> VM4.

Result and Log:

On VM2 I send multicast packets:
[[email protected] ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size:  224 KByte (default)
------------------------------------------------------------
[  3] local 10.224.159.146 port 54457 connected with 224.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams

On VM3, I did not join the multicast group and used tcpdump to capture packets:
[[email protected] ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
[[email protected] ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
04:45:59.213678 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.224902 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.236114 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.247387 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.258611 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.269744 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
04:45:59.281011 IP 10.224.159.146.54457 > 224.1.1.1.commplex-link: UDP, length 1470
...

On VM4, I did not join the multicast group and used tcpdump to capture packets:
[[email protected] ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
[[email protected] ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

We can see that VM2 and VM3 are in the same VLAN and on the same compute node, connected through Open vSwitch. Open vSwitch does not support multicast (IGMP) snooping, so it floods multicast packets to all ports in the VLAN; even though VM3 did not join the multicast group, it still receives the multicast packets. At the same time, VM4 does not receive the packets because it is on another compute node: the two compute nodes are connected through a physical switch that does support IGMP snooping, so the switch does not flood the multicast traffic towards the other compute node.
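If you want to confirm the flooding from the compute node side rather than from inside the VM, you can capture on the receiving VM's tap device on ci91szcmp003; a minimal sketch (the tap device name below is a placeholder, not the real one from this test):

# On compute node ci91szcmp003. The tap name below is only a placeholder;
# find the real tap device for VM3 with "ovs-vsctl show" or "virsh domiflist <instance>".
tcpdump -i tapXXXXXXXX-XX -n host 224.1.1.1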

Case #2:

VM2 sends multicast packets. VM4 joins the multicast group to see whether it can receive them.

Result and Log:

On VM2 I send multicast packets:
[[email protected] ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size:  224 KByte (default)
------------------------------------------------------------
[  3] local 10.224.159.146 port 35844 connected with 224.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams

On VM4 I receive the multicast packets:
[[email protected] ~]# iperf -s -u -B 224.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 224.1.1.1
Joining multicast group  224.1.1.1
Receiving 1470 byte datagrams
UDP buffer size:  224 KByte (default)
------------------------------------------------------------
[  3] local 224.1.1.1 port 5001 connected with 10.224.159.146 port 35844
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.027 ms    0/   89 (0%)
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.026 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.018 ms    0/   89 (0%)
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.019 ms    0/  269 (0%)

[[email protected] ~]# netstat -n -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.1.1.1
eth0            1      224.0.0.1

We can see that VM2 and VM4 are in the same VLAN but on different compute nodes. After VM4 joins the multicast group, VM2 sends the packets and VM4 receives them.
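What makes this work is the IGMP membership report that VM4 sends when it joins the group: the physical switch snoops it and starts forwarding the group towards ci91szcmp004. You can watch for these reports on the compute node's uplink; a minimal sketch, assuming eth1 is the 802.1Q trunk towards the physical switch (an assumption, not taken from the original test):

# On compute node ci91szcmp004. eth1 is assumed to be the VLAN trunk uplink;
# drop the "vlan and" part if the traffic on your uplink is untagged.
tcpdump -i eth1 -n -e 'vlan and igmp'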

VMs in different VLANs using an external router

Case #3:

VM1 sends multicast packets. VM2 and VM4 do not join the multicast group; use tcpdump to see whether they receive the multicast packets.

The multicast packet flow from VM1 to VM2 is VM1 -> OVS -> Phy Switch -> Phy Router -> Phy Switch -> OVS -> VM2.

The multicast packet flow from VM1 to VM4 is VM1 -> OVS -> Phy Switch -> Phy Router -> Phy Switch -> OVS -> VM4.

Result and Log:

On VM1 I send multicast packets:
[[email protected] ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size:  224 KByte (default)
------------------------------------------------------------
[  3] local 10.224.148.94 port 60820 connected with 224.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams

On VM2 and VM4, neither of which joined the multicast group, tcpdump captures nothing (the output from both VMs is shown below):
[[email protected] ~]# netstat -n -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
[[email protected] ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

[[email protected] ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
[[email protected] ~]# tcpdump -i eth0 host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

Neither VM2 nor VM4 receives any multicast packets: since they did not join the group, the external router does not forward the multicast stream into VLAN 820.

Case #4:

VM1 sends multicast packets. VM2 joins the multicast group to see whether it can receive the multicast packets.

Result and Log:

On VM1 I send multicast packets:
[[email protected] ~]# iperf -c 224.1.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size:  224 KByte (default)
------------------------------------------------------------
[  3] local 10.224.148.94 port 41301 connected with 224.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams

On VM2 I receive the multicast packets:
[[email protected] ~]# iperf -s -u -B 224.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 224.1.1.1
Joining multicast group  224.1.1.1
Receiving 1470 byte datagrams
UDP buffer size:  224 KByte (default)
------------------------------------------------------------
[  3] local 224.1.1.1 port 5001 connected with 10.224.148.94 port 41301
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.029 ms    0/   89 (0%)
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.031 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.025 ms    0/   89 (0%)
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.025 ms    0/  269 (0%)

[[email protected] ~]# netstat -g -n
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.1.1.1
eth0            1      224.0.0.1

After VM2 joins the multicast group, the external router forwards the multicast traffic from VLAN 850 into VLAN 820, and VM2 receives the packets.

Conclusion:

1. Open vSwitch does not support IGMP snooping, so when a VM on a compute node sends multicast packets, every VM on the same compute node and in the same VLAN receives them. This can cause some performance loss. As far as I know, the Cisco Nexus 1000V (N1K) supports IGMP snooping, so using the N1K would give better performance here.
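As a side note, newer Open vSwitch releases (roughly 2.4 and later, so not the version shipped with this Havana setup) can do multicast snooping themselves; a minimal sketch, assuming the integration bridge is named br-int:

# Enable IGMP snooping on the integration bridge (newer OVS releases only).
ovs-vsctl set Bridge br-int mcast_snooping_enable=true
# Inspect the learned multicast group table afterwards.
ovs-appctl mdb/show br-int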

2. If the VMs use an external router, the external router needs to be configured to support IGMP, and in that setup multicast works well on OpenStack. If you use neutron-l3-agent instead, the virtual router is simulated with iptables and network namespaces, and it does not support multicast at the moment.
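If you do use neutron-l3-agent and want to see what its namespace-based virtual router does with the multicast traffic, you can inspect the router namespace on the network node; a minimal sketch, where the router ID and the qr- port name are placeholders to be replaced with your own values:

# On the network node running neutron-l3-agent. Replace <router-id> and <port-id>
# with the real values from "neutron router-list" and "neutron router-port-list".
ip netns list | grep qrouter
ip netns exec qrouter-<router-id> ip addr
ip netns exec qrouter-<router-id> tcpdump -i qr-<port-id> -n host 224.1.1.1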
