KVM/QEMU hypervisor driver

The libvirt KVM/QEMU driver can manage any QEMU emulator from version 0.8.1 or later. It can also manage Xenner, which provides the same QEMU command line syntax and monitor interaction.

Project Links

  • The KVM Linux hypervisor

  • The QEMU emulator

Deployment pre-requisites

  • QEMU emulators: The driver will probe /usr/bin for the presence of qemu, qemu-system-x86_64, qemu-system-microblaze, qemu-system-microblazeel, qemu-system-mips, qemu-system-mipsel, qemu-system-sparc, qemu-system-ppc. The results of this can be seen from the capabilities XML output (an example follows this list).

  • KVM hypervisor: The driver will probe /usr/bin for the presence of qemu-kvm and the /dev/kvm device node. If both are found, then KVM fully virtualized, hardware accelerated guests will be available.
  • Xenner hypervisor: The driver will probe /usr/bin for the presence of xenner and the /dev/kvm device node. If both are found, then Xen paravirtualized guests can be run using KVM hardware acceleration.
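
For example, the emulators and domain types actually detected on a given host can be listed from the capabilities XML (output varies by host):

$ virsh capabilities | grep -E '<emulator>|<domain type'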

Connections to QEMU driver

The libvirt QEMU driver is a multi-instance driver, providing a single system wide privileged driver (the "system" instance), and per-user unprivileged drivers (the "session" instance). The URI driver protocol is "qemu". Some example connection URIs for the libvirt driver are:

qemu:///session                      (local access to per-user instance)
qemu+unix:///session                 (local access to per-user instance)

qemu:///system                       (local access to system instance)
qemu+unix:///system                  (local access to system instance)
qemu://example.com/system            (remote access, TLS/x509)
qemu+tcp://example.com/system        (remote access, SASL/Kerberos)
qemu+ssh://root@example.com/system   (remote access, SSH tunnelled)
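
For example, virsh can be pointed at a particular instance with its -c / --connect option (the remote host name is illustrative):

$ virsh -c qemu:///session list --all
$ virsh -c qemu:///system list --all
$ virsh -c qemu+ssh://root@example.com/system list --all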

Driver security architecture

There are multiple layers to security in the QEMU driver, allowing for flexibility in the use of QEMU based virtual machines.

Driver instances

As explained above there are two ways to access the QEMU driver in libvirt. The "qemu:///session" family of URIs connect to a libvirtd instance running as the same user/group ID as the client application. Thus the QEMU instances spawned from this driver will share the same privileges as the client application. The intended use case for this driver is desktop virtualization, with virtual machines storing their disk images in the user's home directory and being managed from the local desktop login session.

The "qemu:///system" family of URIs connect to a libvirtd instance running as the privileged system account ‘root‘. Thus the QEMU instances spawned from this driver may have much higher privileges than the client application managing them. The intended use case for this driver is server virtualization, where the virtual machines may need to be connected to host resources (block, PCI, USB, network devices) whose access requires elevated privileges.

POSIX users/groups

In the "session" instance, the POSIX users/groups model restricts QEMU virtual machines (and libvirtd in general) to only have access to resources with the same user/group ID as the client application. There is no finer level of configuration possible for the "session" instances.

In the "system" instance, libvirt releases from 0.7.0 onwards allow control over the user/group that the QEMU virtual machines are run as. A build of libvirt with no configuration parameters set will still run QEMU processes as root:root. It is possible to change this default by using the --with-qemu-user=$USERNAME and --with-qemu-group=$GROUPNAME arguments to ‘configure‘ during build. It is strongly recommended that vendors build with both of these arguments set to ‘qemu‘. Regardless of this build time default, administrators can set a per-host default setting in the /etc/libvirt/qemu.conf configuration file via the user=$USERNAME and group=$GROUPNAME parameters. When a non-root user or group is configured, the libvirt QEMU driver will change uid/gid to match immediately before executing the QEMU binary for a virtual machine.

If QEMU virtual machines from the "system" instance are being run as non-root, there will be greater restrictions on what host resources the QEMU process will be able to access. The libvirtd daemon will attempt to manage permissions on resources to minimise the likelihood of unintentional security denials, but the administrator / application developer must be aware of some of the consequences / restrictions.

  • The directories /var/run/libvirt/qemu/, /var/lib/libvirt/qemu/ and /var/cache/libvirt/qemu/ must all have their ownership set to match the user / group ID that QEMU guests will be run as. If the vendor has set a non-root user/group for the QEMU driver at build time, the permissions should be set automatically at install time. If a host administrator customizes user/group in /etc/libvirt/qemu.conf, they will need to manually set the ownership on these directories.

  • When attaching USB and PCI devices to a QEMU guest, QEMU will need to access files in /dev/bus/usb and /sys/bus/pci/devices respectively. The libvirtd daemon will automatically set the ownership on specific devices that are assigned to a guest at start time. There should not be any need for administrator changes in this respect.
  • Any files/devices used as guest disk images must be accessible to the user/group ID that QEMU guests are configured to run as. The libvirtd daemon will automatically set the ownership of the file/device path to the correct user/group ID. Applications / administrators must be aware though that the parent directory permissions may still deny access. The directories containing disk images must either have their ownership set to match the user/group configured for QEMU, or their UNIX file permissions must have the 'execute/search' bit enabled for 'others' (a combined example follows this list).

    The simplest option is the latter one, of just enabling the 'execute/search' bit. For any directory to be used for storing disk images, this can be achieved by running the following command on the directory itself, and on any parent directories:

    chmod o+x /path/to/directory
    

    In particular note that if using the "system" instance and attempting to store disk images in a user home directory, the default permissions on $HOME are typically too restrictive to allow access.
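
Putting the points above together, a host administrator who has configured a non-root user might run something along these lines (the user name and image directory are illustrative):

# state directories owned by the configured user/group
chown -R qemu:qemu /var/run/libvirt/qemu /var/lib/libvirt/qemu /var/cache/libvirt/qemu

# a dedicated image directory, searchable by 'others' along the whole path
mkdir -p /srv/images
chown qemu:qemu /srv/images
chmod o+x /srv /srv/images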

Linux process capabilities

The libvirt QEMU driver has a build time option allowing it to use the libcap-ng library to manage process capabilities. If this build option is enabled, then the QEMU driver will use this to ensure that all process capabilities are dropped before executing a QEMU virtual machine. Process capabilities are what gives the 'root' account its high power; in particular the CAP_DAC_OVERRIDE capability is what allows a process running as 'root' to access files owned by any user.

If the QEMU driver is configured to run virtual machines as non-root, then they will already lose all their process capabilities at time of startup. The Linux capability feature is thus aimed primarily at the scenario where the QEMU processes are running as root. In this case, before launching a QEMU virtual machine, libvirtd will use libcap-ng APIs to drop all process capabilities. It is important for administrators to note that this implies the QEMU process will only be able to access files owned by root, and not files owned by any other user.

Thus, if a vendor / distributor has configured their libvirt package to run as 'qemu' by default, a number of changes will be required before an administrator can change a host to run guests as root. In particular it will be necessary to change ownership on the directories /var/run/libvirt/qemu/, /var/lib/libvirt/qemu/ and /var/cache/libvirt/qemu/ back to root, in addition to changing the /etc/libvirt/qemu.conf settings.
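
For example, reverting such a host to run guests as root might involve steps along these lines:

# in /etc/libvirt/qemu.conf:
#   user = "root"
#   group = "root"

chown -R root:root /var/run/libvirt/qemu /var/lib/libvirt/qemu /var/cache/libvirt/qemu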

SELinux basic confinement

The basic SELinux protection for QEMU virtual machines is intended to protect the host OS from a compromised virtual machine process. There is no protection between guests.

In the basic model, all QEMU virtual machines run under the confined domain root:system_r:qemu_t. It is required that any disk image assigned to a QEMU virtual machine is labelled with system_u:object_r:virt_image_t. In a default deployment, the package vendor/distributor will typically ensure that the directory /var/lib/libvirt/images has this label, such that any disk images created in this directory will automatically inherit the correct labelling. If attempting to use disk images in another location, the user/administrator must ensure the directory has been given the requisite label. Likewise physical block devices must be labelled system_u:object_r:virt_image_t.

Not all filesystems allow for labelling of individual files. In particular NFS, VFat and NTFS have no support for labelling. In these cases administrators must use the 'context' option when mounting the filesystem to set the default label to system_u:object_r:virt_image_t. In the case of NFS, there is an alternative option of enabling the virt_use_nfs SELinux boolean.
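
For example, a custom image directory can be given the required label with the standard SELinux tools, and the NFS alternative enabled, roughly as follows (the directory path is illustrative):

# persistently label a custom image directory and apply the label
semanage fcontext -a -t virt_image_t '/srv/images(/.*)?'
restorecon -R -v /srv/images

# NFS alternative: allow guests to use NFS-backed images
setsebool -P virt_use_nfs 1

# filesystems without label support: set a default context at mount time
mount -t nfs -o context=system_u:object_r:virt_image_t:s0 nfs.example.com:/export/images /var/lib/libvirt/images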

SELinux sVirt confinement

The SELinux sVirt protection for QEMU virtual machines builds on the basic level of protection, to also allow individual guests to be protected from each other.

In the sVirt model, each QEMU virtual machine runs under its own confined domain, which is based on system_u:system_r:svirt_t:s0 with a unique category appended, e.g., system_u:system_r:svirt_t:s0:c34,c44. The rules are set up such that a domain can only access files which are labelled with the matching category level, e.g. system_u:object_r:svirt_image_t:s0:c34,c44. This prevents one QEMU process from accessing file resources that belong to another QEMU process.

There are two ways of assigning labels to virtual machines under sVirt. In the default setup, if sVirt is enabled, guests will get an automatically assigned unique label each time they are booted. The libvirtd daemon will also automatically relabel exclusive-access disk images to match this label. Disks that are marked as <shareable/> will get a generic label system_u:object_r:svirt_image_t:s0 allowing all guests read/write access to them, while disks marked as <readonly/> will get a generic label system_u:object_r:svirt_content_t:s0 which allows all guests read-only access.

With statically assigned labels, the application should include the desired guest and file labels in the XML at the time of creating the guest with libvirt. In this scenario the application is responsible for ensuring the disk images and similar resources are suitably labelled to match; libvirtd will not attempt any relabelling.
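
A minimal sketch of the static approach, using virsh to edit the guest definition (the guest name and category values are placeholders):

$ virsh edit demo2
# then add, as a child of <domain>:
#   <seclabel type='static' model='selinux'>
#     <label>system_u:system_r:svirt_t:s0:c392,c662</label>
#   </seclabel>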

If the sVirt security model is active, then the node capabilities XML will include its details. If a virtual machine is currently protected by the security model, then the guest XML will include its assigned labels. If enabled at compile time, the sVirt security model will always be activated if SELinux is available on the host OS. To disable sVirt, and revert to the basic level of SELinux protection (host protection only), set security_driver="none" in the /etc/libvirt/qemu.conf file.
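
For example, the active model and the labels assigned to a running guest can be checked as follows (the guest name is illustrative):

# security model advertised in the host capabilities
$ virsh capabilities | grep -A 3 '<secmodel>'

# labels assigned to a running guest
$ virsh dumpxml demo2 | grep seclabel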

AppArmor sVirt confinement

When using basic AppArmor protection for the libvirtd daemon and QEMU virtual machines, the intention is to protect the host OS from a compromised virtual machine process. There is no protection between guests.

The AppArmor sVirt protection for QEMU virtual machines builds on this basic level of protection, to also allow individual guests to be protected from each other.

In the sVirt model, if a profile is loaded for the libvirtd daemon, then each qemu:///system QEMU virtual machine will have a profile created for it when the virtual machine is started if one does not already exist. This generated profile uses a profile name based on the UUID of the QEMU virtual machine and contains rules allowing access to only the files it needs to run, such as its disks, pid file and log files. Just before the QEMU virtual machine is started, the libvirtd daemon will change into this unique profile, preventing the QEMU process from accessing any file resources that are present in another QEMU process or the host machine.

The AppArmor sVirt implementation is flexible in that it allows an administrator to customize the template file in /etc/apparmor.d/libvirt/TEMPLATE for site-specific access for all newly created QEMU virtual machines. Also, when a new profile is generated, two files are created: /etc/apparmor.d/libvirt/libvirt-<uuid> and /etc/apparmor.d/libvirt/libvirt-<uuid>.files. The former can be fine-tuned by the administrator to allow custom access for this particular QEMU virtual machine, and the latter will be updated appropriately when required file access changes, such as when a disk is added. This flexibility allows for situations such as having one virtual machine in complain mode with all others in enforce mode.

While users can define their own AppArmor profile scheme, a typical configuration will include a profile for /usr/sbin/libvirtd, /usr/lib/libvirt/virt-aa-helper (a helper program which the libvirtd daemon uses instead of manipulating AppArmor directly), and an abstraction to be included by /etc/apparmor.d/libvirt/TEMPLATE (typically /etc/apparmor.d/abstractions/libvirt-qemu). An example profile scheme can be found in the examples/apparmor directory of the source distribution.

If the sVirt security model is active, then the node capabilities XML will include its details. If a virtual machine is currently protected by the security model, then the guest XML will include its assigned profile name. If enabled at compile time, the sVirt security model will be activated if AppArmor is available on the host OS and a profile for the libvirtd daemon is loaded when libvirtd is started. To disable sVirt, and revert to the basic level of AppArmor protection (host protection only), the /etc/libvirt/qemu.conf file can be used to change the setting to security_driver="none".
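
For example, the generated profiles and their enforcement state can be inspected, and a single guest switched to complain mode, roughly as follows (the UUID is a placeholder):

# per-guest profiles generated by virt-aa-helper
ls /etc/apparmor.d/libvirt/

# which libvirt-related profiles are currently loaded and enforced
aa-status | grep libvirt

# move one guest's profile into complain mode
aa-complain /etc/apparmor.d/libvirt/libvirt-<uuid>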

Cgroups device ACLs

Recent Linux kernels have a capability known as "cgroups" which is used for resource management. It is implemented via a number of "controllers", each controller covering a specific task/functional area. One of the available controllers is the "devices" controller, which is able to set up whitelists of block/character devices that a cgroup should be allowed to access. If the "devices" controller is mounted on a host, then libvirt will automatically create a dedicated cgroup for each QEMU virtual machine and set up the device whitelist so that the QEMU process can only access shared devices and those disk images backed by block devices that are explicitly assigned to it.

The list of shared devices a guest is allowed access to is:

/dev/null, /dev/full, /dev/zero,
/dev/random, /dev/urandom,
/dev/ptmx, /dev/kvm, /dev/kqemu,
/dev/rtc, /dev/hpet, /dev/net/tun

In the event of unanticipated needs arising, this can be customized via the /etc/libvirt/qemu.conf file. To mount the cgroups device controller, the following commands should be run as root, prior to starting libvirtd:

mkdir /dev/cgroup
mount -t cgroup none /dev/cgroup -o devices

libvirt will then place each virtual machine in a cgroup at /dev/cgroup/libvirt/qemu/$VMNAME/
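
Once a guest is running, the device whitelist in effect for its cgroup can be inspected, for example:

cat /dev/cgroup/libvirt/qemu/$VMNAME/devices.list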

Import and export of libvirt domain XML configs

The QEMU driver currently supports a single native config format known as qemu-argv. The data for this format is expected to be a single line containing first a list of environment variables, then the QEMU binary name, and finally the QEMU command line arguments.

Converting from QEMU args to domain XML

The virsh domxml-from-native command provides a way to convert an existing set of QEMU args into a guest description using libvirt Domain XML that can then be used by libvirt. Please note that this command is intended to be used to convert existing qemu guests previously started from the command line to be managed through libvirt. It should not be used as a method of creating new guests from scratch. New guests should be created using an application calling the libvirt APIs (see the libvirt applications page for some examples) or by manually crafting XML to pass to virsh.

$ cat > demo.args <<EOF
LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M pc -m 214 -smp 1 -nographic -monitor pty -no-acpi -boot c -hda /dev/HostVG/QEMUGuest1 -net none -serial none -parallel none -usb
EOF

$ virsh domxml-from-native qemu-argv demo.args
<domain type='qemu'>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <memory>219136</memory>
  <currentMemory>219136</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='i686' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu</emulator>
    <disk type='block' device='disk'>
      <source dev='/dev/HostVG/QEMUGuest1'/>
      <target dev='hda' bus='ide'/>
    </disk>
  </devices>
</domain>

NB, don't include literal \ line continuations in the args; put everything on one line.

Converting from domain XML to QEMU args

The virsh domxml-to-native command provides a way to convert a guest description in libvirt Domain XML into a set of QEMU args that can be run manually.

$ cat > demo.xml <<EOF
<domain type='qemu'>
  <name>QEMUGuest1</name>
  <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
  <memory>219200</memory>
  <currentMemory>219200</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='i686' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu</emulator>
    <disk type='block' device='disk'>
      <source dev='/dev/HostVG/QEMUGuest1'/>
      <target dev='hda' bus='ide'/>
    </disk>
  </devices>
</domain>
EOF

$ virsh domxml-to-native qemu-argv demo.xml
  LC_ALL=C PATH=/usr/bin:/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M pc -no-kqemu -m 214 -smp 1 -name QEMUGuest1 -nographic -monitor pty -no-acpi -boot c -drive file=/dev/HostVG/QEMUGuest1,if=ide,index=0 -net none -serial none -parallel none -usb

Pass-through of arbitrary qemu commands

Libvirt provides an XML namespace and an optional library libvirt-qemu.so for dealing specifically with qemu. When used correctly, these extensions allow testing specific qemu features that have not yet been ported to the generic libvirt XML and API interfaces. However, they are unsupported, in that the library is not guaranteed to have a stable API, abusing the library or XML may result in an inconsistent state that crashes libvirtd, and upgrading either qemu-kvm or libvirtd may break behavior of a domain that was relying on a qemu-specific pass-through. If you find yourself needing to use them to access a particular qemu feature, then please post an RFE to the libvirt mailing list to get that feature incorporated into the stable libvirt XML and API interfaces.

The library provides two APIs: virDomainQemuMonitorCommand, for sending an arbitrary monitor command (in either HMP or QMP format) to a qemu guest (Since 0.8.3), and virDomainQemuAttach, for registering a qemu domain that was manually started so that it can then be managed by libvirtd (Since 0.9.4).
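
virsh exposes these calls as the qemu-monitor-command and qemu-attach subcommands; for example (the guest name and PID are illustrative):

# send a human monitor protocol (HMP) command to a running guest
$ virsh qemu-monitor-command --hmp QEMUGuest1 'info status'

# send the equivalent QMP command as JSON
$ virsh qemu-monitor-command QEMUGuest1 '{"execute":"query-status"}'

# register a manually started QEMU process with libvirtd
$ virsh qemu-attach 12345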

Additionally, the following XML additions allow fine-tuning of the command line given to qemu when starting a domain (Since 0.8.3). In order to use the XML additions, it is necessary to issue an XML namespace request (the special xmlns:name attribute) that pulls in http://libvirt.org/schemas/domain/qemu/1.0; typically, the namespace is given the name of qemu. With the namespace in place, it is then possible to add an element <qemu:commandline> under domain, with the following sub-elements repeated as often as needed:

qemu:arg

Add an additional command-line argument to the qemu process when starting the domain, given by the value of the attribute value.

qemu:env

Add an additional environment variable to the qemu process when starting the domain, given with the name-value pair recorded in the attributes name and optional value.

Example:

<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>QEmu-fedora-i686</name>
  <memory>219200</memory>
  <os>
    <type arch='i686' machine='pc'>hvm</type>
  </os>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-newarg'/>
    <qemu:env name='QEMU_ENV' value='VAL'/>
  </qemu:commandline>
</domain>

Example domain XML config

QEMU emulated guest on x86_64
<domain type='qemu'>
  <name>QEmu-fedora-i686</name>
  <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
  <memory>219200</memory>
  <currentMemory>219200</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='i686' machine='pc'>hvm</type>
    <boot dev='cdrom'/>
  </os>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='cdrom'>
      <source file='/home/user/boot.iso'/>
      <target dev='hdc'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/home/user/fedora.img'/>
      <target dev='hda'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>
KVM hardware accelerated guest on i686
<domain type='kvm'>
  <name>demo2</name>
  <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch="i686">hvm</type>
  </os>
  <clock sync="localtime"/>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo2.img'/>
      <target dev='hda'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <mac address='24:42:53:21:52:45'/>
    </interface>
    <graphics type='vnc' port='-1' keymap='de'/>
  </devices>
</domain>
Xen paravirtualized guests with hardware acceleration
