[Repost] KVM VirtIO paravirtualized drivers: why they matter

http://www.ilsistemista.net/index.php/virtualization/42-kvm-virtio-paravirtualized-drivers-why-they-matter.html?limitstart=0

As you probably already know, there are basically two different schools in the virtualization camp:

  • the para-virtualization one, where a modified guest OS uses specific host-side calls (hypercalls) to do its “dirty work” with physical devices

  • the full hardware virtualization one (HVM), where the guest OS runs unmodified and the host system “traps” whenever the guest tries to access a physical device

The two approaches are vastly different: the former requires extensive kernel modifications on both the guest and host OSes, but gives you maximum performance, as both kernels are virtualization-aware and thus optimized for the typical workload they experience. The latter approach is totally transparent to the guest OS and often does not require many kernel-level changes on the host side but, as the guest OS is not virtualization-aware, it generally delivers lower performance.

So it appears that you have to make a conscious choice between performance and guest OS compatibility: the paravirtualized approach prioritizes performance, while the HVM one prioritizes compatibility. However, it is possible to have the best of both worlds: by using paravirtualized guest device drivers in an otherwise HVM environment, you can have both compatibility and performance.

In short, a paravirtualized device driver is a limited, targeted form of paravirtualization, useful when running specific guest OSes for which paravirtualized drivers are available. While largely transparent to the guest OS (you simply need to install a driver), it relieves the virtualizer from emulating a real physical device (a complex operation, as registers, ports, memory, etc. must all be emulated), replacing the emulation with host-side calls. The KVM-based framework for writing paravirtualized drivers is called VirtIO.
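To make this concrete: with qemu-kvm, the paravirtualized device models are selected when the guest is defined. A minimal sketch (the disk image path and tap interface name are illustrative, and tap0 is assumed to already exist and be bridged):

    # Boot a guest with a VirtIO disk and a VirtIO NIC instead of the
    # emulated IDE controller and e1000/RTL8139 adapters.
    qemu-kvm -m 1024 -smp 2 \
        -drive file=/var/lib/libvirt/images/guest.img,if=virtio \
        -net nic,model=virtio \
        -net tap,ifname=tap0,script=no \
        -vnc :1

With libvirt, the same is achieved by setting bus='virtio' on the disk target and model type='virtio' on the network interface in the domain XML.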

Things are much more complex than this, of course. In this article, however, I am not going to explain in detail how a paravirtualized driver works, but to measure the performance implications of using one. Being a targeted form of paravirtualization that requires guest-specific drivers, VirtIO is obviously restricted to the areas where it matters most, so the disk and network subsystems are the prime candidates for these paravirtualized drivers. Let's see if, and how, both Linux (CentOS 6 x86-64) and Windows (Windows 2012 R2 x64) benefit from that paravirtualized goodness.

Testbed and methods

All tests ran on a Dell D620 laptop. The complete system specifications are:

  • Core2 T7200 CPU @ 2.0 GHz

  • 4 GB of DDR2-667 RAM
  • Quadro NVS110 videocard (used in text-only mode)
  • a Seagate ST980825AS 7200 RPM 80 GB SATA hard disk drive (in IDE compatibility mode, as the D620's BIOS does not support AHCI operation)
  • CentOS 6.5 host-side OS with kernel version 2.6.32-431.1.2.0.1.el6.x86_64
  • a 512 MB ramdisk drive used for disk speed measurements (set up as sketched below)
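For reference, a ramdisk like this can be created with a tmpfs mount; the article does not state the exact method used, so sizes and paths here are illustrative:

    # Create a 512 MB tmpfs-backed ramdisk on the host and place a raw
    # guest image on it, so disk tests are not bound by the SATA drive.
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
    qemu-img create -f raw /mnt/ramdisk/bench.img 480M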

On the guest side, we have:

  • a first CentOS 6.5 guest (kernel version 2.6.32-431.1.2.0.1.el6.x86_64)

  • a second Windows 2012 R2 x64 virtual machine

The VirtIO paravirtualized drivers are already included in the standard Linux kernel, so for the CentOS guest no special action or installation was needed. On the Windows guest, I installed the VirtIO disk and network drivers from the virtio-0.1-74.iso package.
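On the Linux side, a quick check confirms that the guest is actually using the built-in VirtIO drivers:

    # VirtIO modules should be loaded in the guest...
    lsmod | grep virtio    # expect virtio_blk, virtio_net, virtio_pci, ...
    # ...and VirtIO disks show up as /dev/vdX rather than /dev/sdX or /dev/hdX
    ls /dev/vd*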

For quick disk benchmarks, I used dd on the Linux side and ATTO on the Windows one. To put additional strain on the guest disk subsystem and the host virtualizer, I ran all disk tests against a ramdisk drive: in this manner, I was sure that any differences were not masked by the slow mechanical disk. Network speed was measured with the same tool on both VMs: iperf, version 2.0.5.

Host CPU load was measured using mpstat.
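The exact invocations are not given in the article; a representative session, with illustrative sizes, paths, and addresses, might look like this:

    # Disk: sequential write and read with dd against the ramdisk-backed
    # volume; oflag=direct/iflag=direct bypass the guest page cache.
    dd if=/dev/zero of=/mnt/bench/testfile bs=1M count=400 oflag=direct
    dd if=/mnt/bench/testfile of=/dev/null bs=1M iflag=direct

    # Network: iperf 2.0.5, receiver on one end, sender on the other
    # (192.168.122.10 is an illustrative guest address).
    iperf -s
    iperf -c 192.168.122.10 -t 30

    # Host CPU load, sampled once per second across all CPUs.
    mpstat -P ALL 1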

OK, let's see the numbers...

CentOS 6 x86-64 guest

The first graph shows CentOS 6 guest disk speed with and without the paravirtualized driver:

Native performance is included for reference only. We can see that the paravirtualized disk driver provides a good speedup over the standard emulated IDE controller. Still, both approaches remain far behind the native scores.

Network speed now:

In this case, the paravirtualized network driver makes a huge difference: while it can't touch native speed, it is way ahead of the emulated E1000 NIC. The RTL8139 was benchmarked out of pure curiosity, and it shows a strange behavior: while its output speed is in line with the emulated NIC's rated speed (100 Mb/s), its input speed is much higher (~400 Mb/s). Strange, but true.

While host CPU load is lower with the fully virtualized NICs, that is only because they deliver much lower performance. In other words, the Mb/s delivered per unit of CPU load is much higher with the paravirtualized network driver.

Windows 2012 R2 x64 guest

Let's see if the Windows guest has some surprises for us. Disk benchmarks first:

This time, the fully virtualized IDE driver lags far behind the paravirtualized driver. In other words: always install the paravirtualized driver when dealing with Windows guests.

Network, please:

The paravirtualized driver continues to be much better than the fully virtualized NICs.

Conclusions

It is obvious that the paravirtualized drivers are an important piece of the KVM ecosystem. While the fully virtualized devices are quite efficient and the only way to support a large variety of guest OSes, you should really use a paravirtualized driver whenever one is available for your guest virtual machine.

Obviously, performance is only part of the equation, stability being even more important. In any case, I found the current VirtIO driver release very stable, at least with the tested guests.

In short: when possible, use the VirtIO paravirtualized drivers!
