How to migrate from VMware and Hyper-V to OpenStack

http://superuser.openstack.org/articles/how-to-migrate-from-vmware-and-hyper-v-to-openstack

Introduction

I migrated more than 120 VMware virtual machines (Linux and Windows) from VMware ESXi to OpenStack. In a lab environment I also migrated virtual machines from Hyper-V with these steps. Unfortunately, I am not allowed to publish the script files I used for this migration, but I can publish the steps and commands that I used to migrate the virtual machines. With these, it should be easy to create scripts that perform the migration automatically.

Just to make it clear: these steps do not convert traditional (non-cloud) applications to cloud-ready applications. In this case, we started to use OpenStack as a traditional hypervisor infrastructure.

Update: Newer versions of libguestfs-tools and qemu-img handle VMDK files very well (I had some issues with older versions of the tools), so the migration can be more efficient. I removed the conversion steps from VMDK to single-file VMDK and from VMDK to RAW; dropping these steps roughly doubles the migration speed.

Disclaimer: This information is provided as-is. I decline any responsibility for damage caused by or with these steps and/or commands. I suggest you do not try and/or test these commands in a production environment. Some commands are very powerful and can destroy configurations and data in Ceph and OpenStack, so always use this information with care and responsibility.

Global steps

  1. Inject VirtIO drivers
  2. Expand partitions (optional)
  3. Customize the virtual machine (optional)
  4. Create Cinder volumes
  5. Convert VMDK to Ceph
  6. Create Neutron port (optional)
  7. Create and boot instance in OpenStack

Specifications

Here are the specifications of the infrastructure I used for the migration:

  • Cloud platform: OpenStack Icehouse
  • Cloud storage: Ceph
  • Windows instances: Windows Server 2003 to 2012R2 (all versions, except Itanium)
  • Linux instances: RHEL5/6/7, SLES, Debian and Ubuntu
  • Only VMDK files from ESXi can be converted; I was not able to convert VMDK files from VMware Player with qemu-img
  • I have no migration experience with encrypted source disks
  • OpenStack provides VirtIO paravirtual hardware to instances

Requirements

A Linux ‘migration node’ with:

  • Operating system (successfully tested with the following):
  • RHEL6 (RHEL7 did not have the “libguestfs-winsupport” package -necessary for NTFS-formatted disks- available at the time of writing)
  • Fedora 19, 20 and 21
  • Ubuntu 14.04 and 15.04
  • Network connections to a running OpenStack environment (duh). Preferably not over the internet, as we need ‘super admin’ permissions, and local network connections are usually faster than connections over the internet.
  • Enough hardware power to convert disks and run instances in KVM (sizing depends on the instances you want to migrate in a certain amount of time).

We used a server with 8x Intel Xeon E3-1230 @ 3.3GHz, 32GB RAM and 8x 1TB SSD, and we managed to migrate more than 500GB per hour. However, this really depends on how much of the instances’ disk space is actually in use. My old company laptop (Core i5, 4GB of RAM and an old 4500rpm HDD) also worked, but obviously the performance was very poor.

  • Local sudo (root) permissions on the Linux migration node
  • QEMU/KVM host
  • Permissions to OpenStack (via Keystone)
  • Permissions to Ceph
  • Unlimited network access to the OpenStack API and Ceph (I have not figured out the network ports that are necessary)
  • VirtIO drivers (downloadable from Red Hat, Fedora, and more)
  • Packages (all should be available in the default distribution repositories):
  • “python-cinderclient” (to control volumes)
  • “python-keystoneclient” (for authentication to OpenStack)
  • “python-novaclient” (to control instances)
  • “python-neutronclient” (to control networks)
  • “python-httplib2” (to communicate with web services)
  • “libguestfs-tools” (to access the disk files)
  • “libguestfs-winsupport” (must be installed separately, on RHEL-based systems only)
  • “libvirt-client” (to control KVM)
  • “qemu-img” (to convert disk files)
  • “ceph” (to import virtual disks into Ceph)
  • “vmware-vdiskmanager” (to expand VMDK disks, downloadable from VMware)
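As a sketch, most of these could be installed in one go on an Ubuntu 14.04 migration node. The package names below are my best guess for the Ubuntu archive (qemu-img lives in qemu-utils, libvirt-client in libvirt-bin, the Ceph CLI in ceph-common); verify them for your distribution. vmware-vdiskmanager is a separate download from VMware.

```shell
# Ubuntu 14.04 sketch; verify package names for your distribution
sudo apt-get update
sudo apt-get install -y \
    python-cinderclient python-keystoneclient python-novaclient \
    python-neutronclient python-httplib2 \
    libguestfs-tools libvirt-bin qemu-utils ceph-common
# On RHEL6 additionally: yum install libguestfs-winsupport (for NTFS disks)
```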

Steps

1. Inject VirtIO drivers

1.1 Windows Server 2012

Since Windows Server 2012 and Windows 8.0, the driver store is protected by Windows, which makes it very hard to inject drivers into an offline Windows disk. Also, Windows Server 2012 does not boot from VirtIO hardware by default. So I took the following steps to install the VirtIO drivers into Windows. Note that these steps should work for all tested Windows versions (2003/2008/2012).

  1. Create a new KVM instance. Make sure the Windows vmdk disk is created as IDE disk! The network card should be a VirtIO device.
  2. Add an extra VirtIO disk, so Windows can install the VirtIO drivers.
  3. Of course, you should add a VirtIO ISO or floppy drive which contains the drivers. You could also inject the driver files with virt-copy-in and inject the necessary registry settings (see paragraph 1.4) for automatic installation of the drivers.
  4. Start the virtual machine and give Windows about two minutes to find the new VirtIO hardware. Install the drivers for all newly found hardware. Verify that there are no devices that have no driver installed.
  5. Shutdown the system and remove the extra VirtIO disk.
  6. Redefine the Windows vmdk disk as VirtIO disk (this was IDE) and start the instance. It should now boot without problems. Shut down the virtual machine.
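As an illustration, steps 1 to 3 could be done with virt-install. All names, paths and sizes here are hypothetical; note the Windows VMDK on the IDE bus, the extra scratch disk on VirtIO, and the driver ISO attached as a CD-ROM.

```shell
# throw-away disk so Windows detects VirtIO storage hardware
qemu-img create -f qcow2 /data/vm/scratch.qcow2 1G

virt-install --name win2012-migrate --ram 4096 --vcpus 2 \
    --disk path=/data/vm/windows2012.vmdk,bus=ide \
    --disk path=/data/vm/scratch.qcow2,bus=virtio \
    --disk path=/data/iso/virtio-win.iso,device=cdrom \
    --network network=default,model=virtio \
    --os-variant win2k12 --import
```

The `--import` flag boots the existing disk instead of starting an installation.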

1.2 Linux (kernel 2.6.25 and above)

Linux kernels 2.6.25 and above already have built-in support for VirtIO hardware, so there is no need to inject VirtIO drivers. Create and start a new KVM virtual machine with VirtIO hardware. If LVM partitions do not mount automatically, run the following to fix this:

(log in)

mount -o remount,rw /

pvscan

vgscan

reboot

(after the reboot all LVM partitions should be mounted and Linux should boot fine)

Shut down the virtual machine when done.

1.3 Linux (kernel older than 2.6.25)

Some Linux distributions provide VirtIO modules for older kernel versions. Some examples:

  • Red Hat provides VirtIO support for RHEL 3.9 and up
  • SUSE provides VirtIO support for SLES 10 SP3 and up

The steps for older kernels are:

  1. Create KVM instance:
  2. Linux (prior to kernel 2.6.25): Create and boot the KVM instance with IDE hardware. Note that this is limited to 4 disks, as only one IDE controller can be configured in KVM. I have not tried SCSI or SATA, as I only had old Linux machines with no more than 4 disks. Linux should start without issues.
  3. Load the virtio modules; this is distribution specific (see the Red Hat and SUSE links below).
  4. Shutdown the instance.
  5. Change all disks to VirtIO disks and boot the instance. It should now boot without problems.
  6. Shut down the virtual machine when done.
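Step 5 (switching the disks from IDE to VirtIO) can be done by editing the libvirt domain XML. A sketch, assuming a domain called migrate-vm with a single disk hda (both names illustrative):

```shell
virsh dumpxml migrate-vm > /tmp/migrate-vm.xml
# change the disk target from IDE to VirtIO, i.e.
#   <target dev='hda' bus='ide'/>  becomes  <target dev='vda' bus='virtio'/>
sed -i "s/dev='hda' bus='ide'/dev='vda' bus='virtio'/" /tmp/migrate-vm.xml
virsh define /tmp/migrate-vm.xml
virsh start migrate-vm
```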

For Red Hat, see: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/ch10s04.html

For SUSE, see: https://www.suse.com/documentation/opensuse121/book_kvm/data/app_kvm_virtio_install.htm

1.4 Windows Server 2008 (and older versions); deprecated

For Windows versions prior to 2012 you could also use these steps to inject the drivers (the steps in paragraph 1.1 should also work for Windows 2003/2008).

  1. Copy all VirtIO driver files (from the downloaded VirtIO drivers) of the corresponding Windows version and architecture to C:\Drivers\. You can use the tool virt-copy-in to copy files and folders into the virtual disk.
  2. Copy the *.sys files to %WINDIR%\system32\drivers\ (you may want to use virt-ls to look for the correct directory; note that Windows is not very consistent with lower- and upper-case characters). Again, you can use virt-copy-in to copy the files into the virtual disk.
  3. The Windows registry should combine the hardware IDs and drivers, but no VirtIO drivers are installed in Windows by default, so we need to add the registry settings ourselves. You can inject the registry file with virt-win-reg. If you copy the VirtIO drivers to a location other than C:\Drivers, you must change the “DevicePath” variable in the last line (the easiest way is to change it on some Windows machine, export the registry file, and use that line).

Registry file (I called the file mergeviostor.reg, as it holds the VirtIO storage information only):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00000000]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00020000]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4&rev_00]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1004&subsys_00081af4&rev_00]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor]
 "ErrorControl"=dword:00000001
 "Group"="SCSI miniport"
 "Start"=dword:00000000
 "Tag"=dword:00000021
 "Type"=dword:00000001
 "ImagePath"="system32\\drivers\\viostor.sys"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
 "DevicePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,69,00,6e,00,66,00,3b,00,63,00,3a,00,5c,00,44,00,72,00,69,00,76,00,65,00,72,00,73,00,00,00

When these steps have been executed, Windows should boot from VirtIO disks without a BSOD. All other drivers (network, balloon, etc.) should install automatically when Windows boots.

See: https://support.microsoft.com/en-us/kb/314082 (written for Windows XP, but it is still usable for Windows 2003 and 2008).

See also: http://libguestfs.org/virt-copy-in.1.html and http://libguestfs.org/virt-win-reg.1.html
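Steps 1 to 3 above could be scripted with the libguestfs tools roughly as follows. All paths are illustrative, and the guest must be shut down first:

```shell
DISK=/data/vm/windows2008.vmdk              # illustrative path

# 1. copy the VirtIO driver files into C:\Drivers
virt-copy-in -a "$DISK" /data/virtio-drivers/. /Drivers/

# 2. verify the exact casing of the drivers directory, then copy viostor.sys
virt-ls -a "$DISK" /Windows/System32/drivers/ | head
virt-copy-in -a "$DISK" /data/virtio-drivers/viostor.sys /Windows/System32/drivers/

# 3. merge the registry file with the CriticalDeviceDatabase entries
virt-win-reg --merge "$DISK" mergeviostor.reg
```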

2. Expand partitions (optional)

Some Windows servers I migrated had limited free disk space on the Windows partition: there was not enough space to install new management applications. So I used the vmware-vdiskmanager tool with the ‘-x’ argument (available from VMware.com) to increase the disk size. You then still need to expand the partition from within the operating system, which you can do while customizing the virtual machine in the next step.
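For example, growing a hypothetical VMDK to 40 GB looks like this (the partition inside the guest still has to be expanded afterwards):

```shell
vmware-vdiskmanager -x 40GB /data/vm/windows2008.vmdk
```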

3. Customize the virtual machine (optional)

To prepare the operating system to run in OpenStack, you probably want to uninstall some software (like VMware Tools and drivers), change passwords, install new management tooling, etc. You can automate this by writing a script that does it for you (such scripts are beyond the scope of this article). You should be able to inject the script and files into the virtual disk with the virt-copy-in command.

3.1 Automatically start scripts in Linux

I started the scripts within Linux manually, as I only had a few Linux servers to migrate. Linux engineers should be able to automate this completely.

3.2 Automatically start scripts in Windows

I chose the RunOnce method to start scripts at Windows boot, as it works on all versions of Windows that I had to migrate. You can add a script to RunOnce by injecting a registry file. RunOnce scripts are only run when a user has logged in, so you should also inject a Windows administrator UserName and Password and set AutoAdminLogon to ‘1’. When Windows starts, it will automatically log in as the defined user. Make sure to shut down the virtual machine when done.

Example registry file to auto login into Windows (with user ‘Administrator’ and password ‘Password’) and start the C:\StartupWinScript.vbs.:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce]
 "Script"="cscript C:\\StartupWinScript.vbs"
 "Parameters"=""

[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
 "AutoAdminLogon"="1"
 "UserName"="Administrator"
 "Password"="Password"
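Assuming the registry file above is saved as autologin.reg and the script as StartupWinScript.vbs (both names illustrative), they can be injected into the offline disk with the libguestfs tools:

```shell
DISK=/data/vm/windows2008.vmdk              # illustrative path

virt-copy-in -a "$DISK" StartupWinScript.vbs /
virt-win-reg --merge "$DISK" autologin.reg
```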

4. Create Cinder volumes

For every disk you want to import, you need to create a Cinder volume. The volume size in the cinder command does not really matter, as we remove (and recreate with the import) the Ceph device in the next step; we create the Cinder volume only to create the link between Cinder and Ceph.

Nevertheless, you should keep the volume size the same as the disk you are planning to import. This is useful for the overview in the OpenStack dashboard (Horizon).

You create a Cinder volume with the following command (the size is in GB; you can check the available volume types with cinder type-list):

cinder create --display-name <name_of_disk> <size> --volume-type <volumetype>

Note the volume ID, as we need it in the next step. You can also find the volume ID with the following command:

cinder list | grep <name_of_disk>
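In a migration script, the ID can be extracted from the cinder list output with a small helper. This is a sketch that assumes the standard cinder table layout, with the ID in the second whitespace-separated column:

```shell
# extract the volume ID for a given display name from `cinder list` output
# (assumes the standard table layout: | ID | Status | Display Name | ... |)
get_volume_id() {
  grep " $1 " | awk '{print $2}'
}

# hypothetical usage in a migration script:
#   cinder create --display-name server01-disk0 50 --volume-type rbd
#   VOLUME_ID=$(cinder list | get_volume_id server01-disk0)
```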

Cinder command information: http://docs.openstack.org/cli-reference/content/cinderclient_commands.html

5. Convert VMDK to Ceph

As soon as the Cinder volumes are created, we can convert the VMDK disk files to RBD images (Ceph). But first we need to remove the Ceph device that Cinder just created. Make sure you remove the correct Ceph block device!

First, find out in which Ceph pool the disk resides. Then remove the volume from Ceph (the volume-id is the ID that you noted in the previous step, ‘Create Cinder volumes’):

rbd -p <ceph_pool> rm volume-<volume-id>

The next step is to convert the VMDK file into the volume on Ceph (the ceph* tuning settings will result in better performance; the vmdk_disk_file variable is the complete path to the VMDK file, and the volume-id is the ID that you noted before):

qemu-img convert -p <vmdk_disk_file> -O rbd rbd:<ceph_pool>/volume-<volume-id>

Do this for all virtual disks of the virtual machine.

Be careful! The rbd command is VERY powerful (you could destroy more data on Ceph than intended)!
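Put together for one disk, a sketch (pool name, path and ID are illustrative; triple-check the volume ID before the rbd rm):

```shell
CEPH_POOL=volumes                           # illustrative pool name
VOLUME_ID=<volume_id>                       # noted in step 4
VMDK=/data/vm/server01-disk0.vmdk

# remove the empty Ceph device that cinder created -- be careful!
rbd -p "$CEPH_POOL" rm "volume-$VOLUME_ID"

# write the VMDK contents into an RBD image with the same name
qemu-img convert -p "$VMDK" -O rbd "rbd:$CEPH_POOL/volume-$VOLUME_ID"
```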

6. Create Neutron port (optional)

In some cases you might want to set a fixed IP address or MAC address. You can do that by creating a port with Neutron and using that port in the next step (create and boot the instance in OpenStack).

You should first find out the network_name (via nova net-list; you need the ‘Label’). Only the network_name is mandatory. You can also add security groups by adding

 --security-group <security_group_name>

Add this parameter once per security group; so if you want to add, for example, 6 security groups, you should add this parameter 6 times.

neutron port-create --fixed-ip ip_address=<ip_address> --mac-address <mac_address> <network_name> --name <port_name>

Note the id of the neutron port, you will need it in the next step.
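In a script, the port ID can be captured from the neutron output. A sketch with illustrative addresses and names (the awk field assumes the standard two-column Field/Value table that neutron prints):

```shell
PORT_ID=$(neutron port-create --fixed-ip ip_address=192.0.2.10 \
    --mac-address fa:16:3e:00:00:01 internal-net --name server01-port \
    | grep ' id ' | awk '{print $4}')
echo "$PORT_ID"
```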

7. Create and boot instance in OpenStack 

Now we have everything prepared to create an instance from the Cinder volumes and the optional Neutron port.

Note the volume-id of the boot disk.

You also need to know the ID of the flavor you want to use. Run nova flavor-list to get the flavor-id of the desired flavor.

Now you can create and boot the new instance:

nova boot <instance_name> --flavor <flavor_id> --boot-volume <boot_volume_id> --nic port-id=<neutron_port_id>

Note the instance ID. Then add each additional disk of the instance by executing this command (if there are other volumes you want to attach):

nova volume-attach <instance_ID> <volume_id>
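The last steps combined, as a sketch (the flavor, volume and port IDs are placeholders to fill in from the earlier steps):

```shell
FLAVOR_ID=2                      # from: nova flavor-list
BOOT_VOLUME_ID=<boot_volume_id>  # from step 4
PORT_ID=<neutron_port_id>        # from step 6 (optional)

nova boot server01 --flavor "$FLAVOR_ID" \
    --boot-volume "$BOOT_VOLUME_ID" --nic port-id="$PORT_ID"

# look up the new instance ID, then attach the remaining data volumes
INSTANCE_ID=$(nova list | grep " server01 " | awk '{print $2}')
nova volume-attach "$INSTANCE_ID" <data_volume_id>
```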

This post first appeared on Nathan Portegijs‘ blog. Superuser is always interested in how-tos and other contributions, please get in touch: [email protected]

Cover Photo by Clement127 // CC BY NC
