Transforming Xen guests into KVM guests
We have some Xen systems. They're configured in a certain way. We want to transfer the Xen guests to a KVM host, which we want configured in a slightly different way.
This page documents that process. It's a work in progress.
Shorthand
$xenhost -- this is the dom0 running the xen setup we are transferring from
$kvmhost -- this is the machine we are transferring to (note that this is not the same machine as $xenhost, though such a transfer might be possible for some people; that's not what we're working with, so some choices here might be different)
$guest -- this is the name of the guest being transferred
$xenguest -- this refers to the copy of $guest when it's still running on $xenhost
$kvmguest -- this refers to the copy of $guest that will be running on $kvmhost
Differences
Here are the main ways that the two virtualization schemes differ for us:
Disk differences
Our Xen setup (via xen-tools) has $xenhost serving $guest its partitions as individual volumes. So $xenhost has them broken out in LVM directly, and $guest doesn't use LVM at all, or even see a parent /dev/sda -- it only sees /dev/sda1, /dev/sda2, etc.
Our KVM setup typically has $kvmhost serving a single volume to $guest, and $guest carves it up however it sees fit.
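As a sketch of the difference on the host side (the volume group `vg0` and guest name `web1` here are hypothetical examples, not names from our setup):

```shell
# Xen-style (xen-tools): one LV per guest filesystem, exported as
# individual "partitions" -- the guest never sees a parent disk.
lvcreate -L 10G -n web1-disk vg0   # appears in the guest as /dev/sda1
lvcreate -L 2G  -n web1-swap vg0   # appears in the guest as /dev/sda2

# KVM-style: one LV per guest, exported as a whole disk; the guest
# writes its own partition table onto it.
lvcreate -L 12G -n web1 vg0        # appears in the guest as a whole disk
```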
Booting differences
Our Xen setup has $xenhost hold a kernel and initrd outside of $guest, and boots $guest into those directly (no bootloader, no "BIOS"-level disk access at boot).
Our KVM setup has $kvmhost pass the virtual disk to $guest, which uses an emulated BIOS to pull a bootloader off the disk and start from there.
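For illustration, kvm can imitate the Xen-style direct boot with its -kernel/-initrd/-append options, which is also what we use later for the one-time fix-up boot; the disk path, kernel/initrd filenames, and root device below are hypothetical:

```shell
# Direct kernel boot, bypassing the bootloader (Xen-like):
kvm -drive file=/dev/vg0/web1,if=virtio \
    -kernel ./vmlinuz -initrd ./initrd.img \
    -append "root=/dev/vda1 console=ttyS0"

# Normal boot: the emulated BIOS loads the bootloader off the disk.
kvm -drive file=/dev/vg0/web1,if=virtio
```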
Architecture differences
All our KVM hosts are amd64. Most of our remaining Xen hosts (and guests) are 32-bit (686). We'd like all the guests to end up 64-bit clean eventually, but a non-invasive transfer that is relatively opaque to the guest might be preferable, even if it means the guest doesn't move to 64-bits immediately.
Strategies
There are two main strategies we can use for transfer, which are rather different from one another:
Data synchronization
In this approach, we'd set up $kvmguest initially with a new IP address, as a clean minimal install.
- synchronize packages (at the same versions) by copying the dpkg selections from $xenguest
- synchronize user data via rsync
- synchronize system configuration (except networking, maybe?)
- halt public-facing services on $xenguest
- touch up synchronization with another rsync run
- dump databases on $xenguest, restore them on $kvmguest
- halt $xenguest
- modify networking on $kvmguest to match the old $xenguest networking config
- restart services on $kvmguest (or just restart $kvmguest entirely)
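The package and data steps above might look roughly like this; the paths are examples only, and note that dpkg selections record package names, not versions, so matching versions depends on both guests using the same mirror/snapshot:

```shell
# On $xenguest: record which packages are installed.
dpkg --get-selections > /tmp/selections

# On $kvmguest: replay the selections and install.
dpkg --set-selections < /tmp/selections
apt-get dselect-upgrade

# On $kvmguest: pull user data and config from $xenguest
# (preserving hard links, ACLs, xattrs, and numeric ids),
# leaving the network config alone.
rsync -aHAX --numeric-ids xenguest:/home/ /home/
rsync -aHAX --numeric-ids --exclude network/ xenguest:/etc/ /etc/
```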
Data synchronization advantages
- $kvmguest will be fully 64-bit from the start
- $kvmguest will be configured/arranged pretty closely to how we have all the other guests
- Should be "just like" a restore from backup -- good practice for recovery?
Block device mirroring
In this approach, we'd approximate a direct transfer of disks from $xenhost to $kvmhost and try to disturb $guest as little as possible.
- get a list of block devices for $xenguest, and construct a parent LV on $kvmhost large enough to fit them all
- use parted to carve up the device to match the filesystems currently exported to $xenguest
- use kpartx to expose them to $kvmhost directly
- on $xenguest:
  - go ahead and install packages for a normal kernel and a bootloader (linux-image-2.6-amd64 and grub, presumably); configure the bootloader to use the serial console
  - ensure that /etc/fstab (and other config) refer to filesystems by UUID where possible
- rsync the virtual devices from $xenhost to $kvmhost
- shutdown $xenguest
- touch up rsync
- on $kvmhost, extract kernel and initrd from the filesystem
- boot $kvmguest with kernel + initrd directly into single user mode (using -kernel, -append, and -initrd arguments to kvm) in order to:
    rm -f /boot/grub/device.map
    grub-install /dev/vda
    update-grub
- restart $kvmguest "normally" (from emulated BIOS)
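The host-side steps above can be sketched as follows. All names are hypothetical (VG `vg0`, guest `web1`, a 10G root plus 2G swap), and the sizes/partition types need adjusting per guest. One caveat: plain rsync copies device *nodes*, not their contents, so the initial copy here uses dd over ssh; the touch-up pass would need an rsync built with the --copy-devices patch (or a similar contents-aware tool):

```shell
# On $kvmhost: parent LV big enough for all the old volumes plus a
# partition table.
lvcreate -L 12G -n web1 vg0

# Carve it up to match the old layout, then expose the partitions.
parted /dev/vg0/web1 mklabel msdos
parted /dev/vg0/web1 mkpart primary ext3 1MiB 10GiB
parted /dev/vg0/web1 mkpart primary linux-swap 10GiB 12GiB
kpartx -av /dev/vg0/web1   # typically creates /dev/mapper/vg0-web1p1, ...

# Copy each old volume's contents into its new partition.
ssh xenhost dd if=/dev/vg0/web1-disk bs=1M | dd of=/dev/mapper/vg0-web1p1 bs=1M

# After halting $xenguest and touching up the copy, the one-time
# fix-up boot into single user mode:
kvm -drive file=/dev/vg0/web1,if=virtio \
    -kernel ./vmlinuz -initrd ./initrd.img \
    -append "root=/dev/vda1 console=ttyS0 single"
# ...then, inside the guest:
#   rm -f /boot/grub/device.map
#   grub-install /dev/vda
#   update-grub
```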
Block device mirroring advantages
- guest seems to change relatively little -- main changes (installation of kernel + bootloader packages and reconfiguration of /etc/fstab) can be tested while still running on $xenhost
- user data goes through fewer transformations
- no finicky network config shuffling, only one host active at once.