E1000, E1000E and VMXNET3 performance test

After reading several posts and blog articles about vSphere 5 and E1000E performance, my curiosity was triggered: do all these claims actually hold up, and how does vSphere really behave when tested?

Test setup

The setup I used is similar to the one described in http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf. It looks like this:

Bare metal server (Client): B22-M3, 16GB, 2xE5-2450, 1280VIC
vSphere ESXi 5.0 server: B200-M3, 64GB, 2xE5-2680, 1280VIC
To accommodate the tests, the 1280 VICs are connected to 2108 IOMs, and only Fabric A / 6248-A is used.
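
The post does not name the traffic generator; purely as an illustration of the kind of streaming TX test involved (in the spirit of a netperf-style run), a minimal Python TCP sender that pushes a stream for a fixed time and reports the achieved Mbit/s could look like the sketch below. The target address, port and duration are placeholders, and a simple listener on the bare-metal client side is assumed.

```python
# Illustration only: a minimal TCP streaming sender in the spirit of a
# netperf-style throughput test. Address, port and duration are placeholders;
# a listener on the bare-metal client side is assumed.
import socket
import time

TARGET = "192.0.2.10"    # placeholder: IP of the bare-metal client
PORT = 5001              # placeholder port
CHUNK = b"\x00" * 65536  # 64 KiB send buffer
DURATION = 30            # seconds per run

def stream_throughput():
    """Push a TCP stream for DURATION seconds and report the achieved Mbit/s."""
    sent = 0
    with socket.create_connection((TARGET, PORT)) as sock:
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
    print(f"sent {sent} bytes, ~{sent * 8 / DURATION / 1_000_000:.0f} Mbit/s")

if __name__ == "__main__":
    stream_throughput()
```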

The VM is configured in the following way (screenshot); a quick way to verify the adapter types is sketched after the list:

  • Local Area Connection: E1000
  • Local Area Connection 2: VMXNET3
  • Local Area Connection 3: E1000E
  • 4GB Memory, 1 vCPU
  • Windows 2008R2
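
To double-check which virtual device backs each adapter, one can read the ethernetN.virtualDev entries in the VM's .vmx file. The snippet below is only a sketch; the datastore path and VM name are placeholders, not taken from the actual test VM.

```python
# Sketch only: confirm which virtual NIC type backs each adapter by reading
# the ethernetN.virtualDev keys from the VM's .vmx file. The path below is a
# placeholder, not the actual datastore path used in this test.
VMX_PATH = "/vmfs/volumes/datastore1/win2008r2-test/win2008r2-test.vmx"

def nic_types(vmx_path):
    """Return a mapping like {'ethernet0': 'e1000'} parsed from a .vmx file."""
    types = {}
    with open(vmx_path) as vmx:
        for line in vmx:
            key, _, value = line.partition("=")
            key = key.strip()
            if key.startswith("ethernet") and key.endswith(".virtualDev"):
                types[key.split(".")[0]] = value.strip().strip('"')
    return types

print(nic_types(VMX_PATH))
# Expected for this VM: ethernet0 -> e1000, ethernet1 -> vmxnet3, ethernet2 -> e1000e
```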

Test results

The following are the results of the best-performing test run.

RAW DATA

ADAPTER   WIN NET   WIN CPU   VM CPU    VM NET    FEX NET   GRAPH
VMXNET3   9715      53%       82.57%    9493.67   9.92      LINK
E1000     9784      67%       118.89%   9491.87   9.99      LINK
E1000E    9654      66%       91.77%    9469.47   10.0      LINK

COLUMN EXPLANATIONS:

COLUMN    DESCRIPTION
WIN NET   Average transmit rate in Mbit/s measured in Windows
WIN CPU   Average CPU load measured in Windows
VM CPU    %USED counter in esxtop
VM NET    MbTX/s in esxtop
FEX NET   TX bit rate in Gbit/s as seen by the 2108 IOM module
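
As a sanity check on the units, the Windows-side throughput (Mbit/s) should line up with the fabric-side rate reported by the IOM (Gbit/s). The small Python snippet below just replays the raw numbers from the table above and converts them to a common unit.

```python
# Replaying the raw numbers from the table above and converting everything to
# Gbit/s, assuming WIN NET and VM NET are Mbit/s and FEX NET is Gbit/s.
results = {
    # adapter:  (WIN NET, WIN CPU, VM CPU, VM NET, FEX NET)
    "VMXNET3": (9715, 53, 82.57, 9493.67, 9.92),
    "E1000":   (9784, 67, 118.89, 9491.87, 9.99),
    "E1000E":  (9654, 66, 91.77, 9469.47, 10.0),
}

for adapter, (win_net, win_cpu, vm_cpu, vm_net, fex_net) in results.items():
    # The Windows-side rate in Gbit/s should land close to what the 2108 IOM
    # sees on the fabric side.
    print(f"{adapter:8s} guest {win_net / 1000:5.2f} Gbit/s vs fabric {fex_net:5.2f} Gbit/s")
```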

Data interpretation

We can clearly see that all adapters can be driven at full line rate. There are small differences, but these could very well be due to sampling periods and the like.

CPU usage is higher with the E1000 and E1000E adapters, for both WIN CPU and VM CPU. I think, however, that only the E1000 carries a real penalty; for the E1000E the overhead stays within acceptable limits.
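
One way to make this visible is to divide the esxtop throughput by the esxtop %USED figure, giving a rough Mbit/s-per-%CPU number for each adapter. This is not a formal metric, just a back-of-the-envelope calculation over the raw data above.

```python
# Back-of-the-envelope efficiency: esxtop throughput divided by esxtop %USED.
# Higher means more throughput per unit of CPU spent by the VM.
vm_net_mbit = {"VMXNET3": 9493.67, "E1000": 9491.87, "E1000E": 9469.47}
vm_cpu_used = {"VMXNET3": 82.57, "E1000": 118.89, "E1000E": 91.77}

for adapter in vm_net_mbit:
    print(f"{adapter:8s} ~{vm_net_mbit[adapter] / vm_cpu_used[adapter]:5.1f} Mbit/s per %USED")
# Roughly: VMXNET3 ~115, E1000E ~103, E1000 ~80 Mbit/s per %USED
```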

Disclaimers

I’m not a benchmarking specialist, nor is this my job, so these figures are just my personal observations and by no means the result of a full professional benchmark. They are, however, fully reproducible.

The attached graphs do show some dips, which I did not investigate further. I know technically why they are there, but I did not look into fixing them.
