Guest CPU model configuration in libvirt with QEMU/KVM

Every hypervisor defines the CPU model a guest can see differently. Xen provides host passthrough, so the guest sees a CPU identical to the host's.

In QEMU/KVM, the guest sees a custom, generic CPU model such as "qemu32" or "qemu64". VMware is more sophisticated: it partitions host CPUs into groups and presents the guest with the baseline CPU model of each group, so a guest can migrate freely within its group.

Each architecture exposes its CPU capabilities in its own way; x86, for example, uses the CPUID instruction to expose the CPU's capabilities.

VMware and Xen simply expose the CPUID instruction data directly to the guest, but QEMU/KVM supports more architectures than just x86, so it cannot take this approach.

Libvirt therefore describes a CPU as a baseline CPU model plus additional named features, where the baseline CPUID is the largest common subset shared by CPUs of that model.

For example, the CPU on one laptop reports 20 features in total:

# virsh capabilities

<capabilities>
  <host>
    <cpu>
      <arch>i686</arch>
      <model>pentium3</model>
      <topology sockets='1' cores='2' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='lm'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='ssse3'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='pni'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='sse2'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='clflush'/>
      <feature name='apic'/>
    </cpu>

    ...snip...

Now that we know how libvirt describes CPU models and features, the question becomes which CPU capabilities to expose to the guest.

If every CPU in the data center is identical, host passthrough can be used.

If not, the common subset of these CPUs' capabilities must be exposed instead.

The libvirt API provides this: pass it XML descriptions of the CPUs, and it computes their common subset.

For example, on another server:

<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>phenom</model>
      <topology sockets='2' cores='4' threads='1'/>
      <feature name='osvw'/>
      <feature name='3dnowprefetch'/>
      <feature name='misalignsse'/>
      <feature name='sse4a'/>
      <feature name='abm'/>
      <feature name='cr8legacy'/>
      <feature name='extapic'/>
      <feature name='cmp_legacy'/>
      <feature name='lahf_lm'/>
      <feature name='rdtscp'/>
      <feature name='pdpe1gb'/>
      <feature name='popcnt'/>
      <feature name='cx16'/>
      <feature name='ht'/>
      <feature name='vme'/>
    </cpu>

    ...snip...

To check whether this CPU is compatible with the laptop's CPU:

$ ./tools/virsh cpu-compare cpu-server.xml

CPU described in cpu-server.xml is incompatible with host CPU

The result is incompatible because the server's CPU has some features that the laptop's CPU lacks.
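The both-cpus.xml file passed to cpu-baseline below is simply the two host `<cpu>` descriptions concatenated into a single file; a truncated sketch (the full feature lists from the two capabilities outputs above would be included):

```xml
<cpu>
  <arch>i686</arch>
  <model>pentium3</model>
  <topology sockets='1' cores='2' threads='1'/>
  <feature name='lahf_lm'/>
  <!-- ...remaining laptop features... -->
</cpu>
<cpu>
  <arch>x86_64</arch>
  <model>phenom</model>
  <topology sockets='2' cores='4' threads='1'/>
  <feature name='osvw'/>
  <!-- ...remaining server features... -->
</cpu>
```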

To find their common subset:

# virsh cpu-baseline both-cpus.xml

<cpu match='exact'>
  <model>pentium3</model>
  <feature policy='require' name='lahf_lm'/>
  <feature policy='require' name='lm'/>
  <feature policy='require' name='cx16'/>
  <feature policy='require' name='monitor'/>
  <feature policy='require' name='pni'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='sse2'/>
  <feature policy='require' name='clflush'/>
  <feature policy='require' name='apic'/>
</cpu>

The common subset contains only nine features.
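This baseline output can be pasted directly into a guest's domain XML, giving a guest CPU that can run on, and migrate between, both hosts. A sketch of the resulting domain fragment (the surrounding domain elements are elided):

```xml
<cpu match='exact'>
  <model>pentium3</model>
  <feature policy='require' name='lahf_lm'/>
  <feature policy='require' name='lm'/>
  <feature policy='require' name='cx16'/>
  <feature policy='require' name='monitor'/>
  <feature policy='require' name='pni'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='sse2'/>
  <feature policy='require' name='clflush'/>
  <feature policy='require' name='apic'/>
</cpu>
```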

https://www.berrange.com/posts/2010/02/15/guest-cpu-model-configuration-in-libvirt-with-qemukvm/

Detailed parameters of the guest CPU model

Table 21.9. CPU model and topology elements

Element Description
<cpu> This element contains all parameters for the vCPU feature set.
<match> Specifies how closely the features indicated in the <cpu> element must match the vCPUs that are available. The match attribute can be omitted if <topology> is the only element nested in the <cpu> element. Possible values for the match attribute are:

  • minimum - The features listed are the minimum requirement. There may be more features available in the vCPU than are indicated, but this is the minimum that will be accepted. This value will fail if the minimum requirements are not met.
  • exact - the virtual CPU provided to the guest virtual machine must exactly match the features specified. If no match is found, an error will result.
  • strict - the guest virtual machine will not be created unless the host physical machine CPU exactly matches the specification.

If the match attribute is omitted from the <cpu> element, the default setting match='exact' is used.

<mode> This optional attribute may be used to make it easier to configure a guest virtual machine CPU to be as close to the host physical machine CPU as possible. Possible values for the mode attribute are:

  • custom - describes how the CPU is presented to the guest virtual machine. This is the default setting when the mode attribute is not specified. This mode makes it so that a persistent guest virtual machine will see the same hardware no matter what host physical machine the guest virtual machine is booted on.
  • host-model - this is essentially a shortcut to copying the host physical machine CPU definition from the capabilities XML into the domain XML. As the CPU definition is copied just before starting a domain, the same XML can be used on different host physical machines while still providing the best guest virtual machine CPU each host physical machine supports. Neither the match attribute nor any feature elements can be used in this mode. For more information see libvirt domain XML CPU models
  • host-passthrough - With this mode, the CPU visible to the guest virtual machine is exactly the same as the host physical machine CPU, including elements that cause errors within libvirt. The obvious downside of this mode is that the guest virtual machine environment cannot be reproduced on different hardware, and therefore this mode is recommended only with great caution. Neither model nor feature elements are allowed in this mode.
  • Note that in both host-model and host-passthrough mode, the real (approximate in host-passthrough mode) CPU definition which would be used on current host physical machine can be determined by specifying VIR_DOMAIN_XML_UPDATE_CPU flag when calling virDomainGetXMLDesc API. When running a guest virtual machine that might be prone to operating system reactivation when presented with different hardware, and which will be migrated between host physical machines with different capabilities, you can use this output to rewrite XML to the custom mode for more robust migration.

<model> Specifies the CPU model requested by the guest virtual machine. The list of available CPU models and their definitions can be found in the cpu_map.xml file installed in libvirt's data directory. If a hypervisor is not able to use the exact CPU model, libvirt automatically falls back to the closest model supported by the hypervisor while maintaining the list of CPU features. An optional fallback attribute can be used to forbid this behavior, in which case an attempt to start a domain requesting an unsupported CPU model will fail. Supported values for the fallback attribute are: allow (the default) and forbid. The optional vendor_id attribute can be used to set the vendor ID seen by the guest virtual machine. It must be exactly 12 characters long. If not set, the vendor ID of the host physical machine is used. Typical possible values are AuthenticAMD and GenuineIntel.
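For example, a domain could pin the model strictly and override the vendor ID; a hypothetical fragment (Nehalem is just one example model from cpu_map.xml):

```xml
<cpu mode='custom' match='exact'>
  <model fallback='forbid' vendor_id='GenuineIntel'>Nehalem</model>
</cpu>
```

With fallback='forbid', starting the domain fails outright on a hypervisor that cannot provide Nehalem, instead of silently substituting a similar model.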
<vendor>    Specifies CPU vendor requested by the guest virtual machine. If this element is missing, the guest virtual machine runs on a CPU matching given features regardless of its vendor. The list of supported vendors can be found in cpu_map.xml.
<topology> Specifies requested topology of virtual CPU provided to the guest virtual machine. Three non-zero values have to be given for sockets, cores, and threads: total number of CPU sockets, number of cores per socket, and number of threads per core, respectively.
<feature> Can contain zero or more elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the same file as CPU models. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values:

  • force - forces the feature to be reported as supported by the virtual CPU regardless of whether it is actually supported by the host physical machine CPU.
  • require - dictates that guest virtual machine creation will fail unless the feature is supported by the host physical machine CPU. This is the default setting.
  • optional - the feature is supported by the virtual CPU if and only if it is supported by the host physical machine CPU.
  • disable - the feature is not supported by the virtual CPU.
  • forbid - guest virtual machine creation will fail if the feature is supported by host physical machine CPU.
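Combining these policies, a fragment like the following (a hypothetical sketch) requires vmx, takes ssse3 only if the host has it, and hides lahf_lm from the guest:

```xml
<cpu match='exact'>
  <model>core2duo</model>
  <feature policy='require' name='vmx'/>
  <feature policy='optional' name='ssse3'/>
  <feature policy='disable' name='lahf_lm'/>
</cpu>
```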
