Live disk migration with libvirt blockcopy

OpenStack Nova uses libvirt blockcopy (the virDomainBlockRebase API, exposed via the libvirt Python bindings) to perform live snapshots.

Create the base image:

$ qemu-img create -f qcow2 base.qcow2 1G
$ guestfish -a base.qcow2
[. . .]
><fs> run
><fs> part-disk /dev/sda mbr
><fs> mkfs ext4 /dev/sda1
><fs> mount /dev/sda1 /
><fs> touch /foo
><fs> ls /
foo
lost+found
><fs> exit

Create a QCOW2 overlay snapshot ‘snap1’, with ‘base’ as its backing file:

$ qemu-img create -f qcow2 -b base.qcow2 -o backing_fmt=qcow2 snap1.qcow2
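
Optionally, sanity-check that the new overlay points at the intended backing file; the output of qemu-img info should list base.qcow2 (format qcow2) as the backing file:

$ qemu-img info snap1.qcow2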

Add a file to snap1.qcow2:

$ guestfish -a snap1.qcow2
[. . .]
><fs> run
><fs> mount /dev/sda1 /
><fs> touch /bar
><fs> ls /
bar
foo
lost+found
><fs> exit

Create another QCOW2 overlay snapshot ‘snap2’, with ‘snap1’ as its backing file:

$ qemu-img create -f qcow2 -b snap1.qcow2 -o backing_fmt=qcow2 snap2.qcow2

Add another test file ‘baz’ to snap2.qcow2 using guestfish, as in the previous sessions, so the contents of base, snap1, and snap2 can be distinguished.
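
For example, a session along these lines should do (mirroring the snap1 step above; the filesystem already exists in the backing chain, so no partitioning or mkfs is needed):

$ guestfish -a snap2.qcow2
[. . .]
><fs> run
><fs> mount /dev/sda1 /
><fs> touch /baz
><fs> ls /
bar
baz
foo
lost+found
><fs> exit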

Create a simple libvirt guest XML file as below, with the source file pointing to snap2.qcow2, which will be the active block device (i.e. it receives all new guest writes):

$ cat <<EOF > /etc/libvirt/qemu/testvm.xml
<domain type='kvm'>
  <name>testvm</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/export/vmimages/snap2.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
EOF
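
Optionally, validate the XML against libvirt's schemas before defining it (virt-xml-validate ships with libvirt; the 'domain' schema name is assumed here):

$ virt-xml-validate /etc/libvirt/qemu/testvm.xml domain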

Define the guest and start it:

$ virsh define /etc/libvirt/qemu/testvm.xml
Domain testvm defined from /etc/libvirt/qemu/testvm.xml
$ virsh start testvm
Domain testvm started

Perform live disk migration
Undefine the running libvirt guest to make it transient[*]:

$ virsh dumpxml --inactive testvm > /var/tmp/testvm.xml
$ virsh undefine testvm
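
Optionally, confirm that the guest is now transient; virsh dominfo reports a Persistent field, which should read 'no' at this point:

$ virsh dominfo testvm | grep -i persistent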

Check the current block device in use before performing the live disk migration:

$ virsh domblklist testvm
Target     Source
------------------------------------------------
vda        /export/vmimages/snap2.qcow2

Optionally, display the backing chain of snap2.qcow2:

$ qemu-img info --backing-chain /export/vmimages/snap2.qcow2
[. . .] # Output removed for brevity

Initiate blockcopy (live disk mirroring):

$ virsh blockcopy --domain testvm vda /export/blockcopy-test/backups/copy.qcow2 --wait --verbose --shallow --pivot

Details of the above command: it creates the file copy.qcow2 at the specified path; --shallow performs a shallow blockcopy of the current block device (vda), i.e. the ‘copy’ shares the existing backing chain instead of flattening it; --wait --verbose blocks until the job completes and reports progress; --pivot then switches the live QEMU process over to the ‘copy’.
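
If you prefer to watch the copy phase and pivot by hand rather than using --wait --pivot, roughly the following should work: the job first mirrors the existing data, then stays in a ready state (source and copy kept in sync) until you pivot or abort.

$ virsh blockcopy --domain testvm vda /export/blockcopy-test/backups/copy.qcow2 --shallow
# Poll the job until it reports (close to) 100%:
$ virsh blockjob testvm vda --info
# When ready, switch the guest over to the copy (or use --abort to cancel and stay on the source):
$ virsh blockjob testvm vda --pivot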

Confirm that QEMU has pivoted to the ‘copy’ by enumerating the current block device in use:

$ virsh domblklist testvm
Target     Source
------------------------------------------------
vda        /export/blockcopy-test/backups/copy.qcow2

Again, display the backing chain of the ‘copy’; since the blockcopy was --shallow, it reuses the same backing chain that snap2.qcow2 had (base.qcow2 <- snap1.qcow2 <- copy.qcow2):

$ qemu-img info --backing-chain /export/blockcopy-test/backups/copy.qcow2
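
If the full output is too verbose, a quick way to see just the chain is to filter for the image and backing file lines:

$ qemu-img info --backing-chain /export/blockcopy-test/backups/copy.qcow2 | grep -E '^(image|backing file)'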

Enumerate the contents of copy.qcow2:

$ guestfish -a /export/blockcopy-test/backups/copy.qcow2
[. . .]
><fs> run
><fs> mount /dev/sda1 /
><fs> ls /
bar
baz
foo
lost+found
><fs> quit

(Notice that all the content from base.qcow2, snap1.qcow2, and snap2.qcow2 is visible through copy.qcow2: snap2's data was mirrored into the copy, while the base and snap1 contents come via the shared backing chain.)

Edit the saved guest XML (/var/tmp/testvm.xml from the dumpxml step above) to use copy.qcow2, and define the guest again:

$ vi /var/tmp/testvm.xml
# Replace the <source file='/export/vmimages/snap2.qcow2'/>
# with <source file='/export/blockcopy-test/backups/copy.qcow2'/>

$ virsh define /var/tmp/testvm.xml
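
The guest is persistent again; the same dominfo check from earlier should now report 'Persistent: yes':

$ virsh dominfo testvm | grep -i persistent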

[*] Reason for undefining and re-defining the guest: as of this writing, QEMU does not yet support persistent dirty bitmaps, which would allow a QEMU process to be restarted with disk mirroring intact. Patches for this have been in progress upstream for a while. Until they land in mainline QEMU, the current approach (as illustrated above) is to temporarily make the running libvirt guest transient, perform the live blockcopy, and then make the guest persistent again. (Thanks to Eric Blake, one of the libvirt project's principal developers, for this detail.)

http://kashyapc.com/2014/07/06/live-disk-migration-with-libvirt-blockcopy/

 