Xen makes it possible to run several Linux systems on one physical machine. The hardware for the different systems is provided virtually. This chapter gives an overview of the possibilities and limitations of this technology. Sections about installing, configuring, and running Xen complete this introduction.
Virtual machines commonly emulate the complete hardware a guest system needs. The disadvantage is that the emulated hardware is much slower than the real silicon. Xen takes a different approach: it restricts emulation to as few parts as possible. To achieve this, Xen uses paravirtualization, a technique that presents virtual machines with an interface that is similar, but not identical, to the underlying hardware. Host and guest operating systems are therefore adapted at the kernel level; the user space remains unchanged.

Xen controls the hardware by means of a hypervisor and a controlling guest, also called Domain-0. These provide all the needed virtualized block and network devices. The guest systems use these virtual devices to run and to connect to other guests or to the local network. When several physical machines running Xen are configured so that the same virtual block and network devices are available on each, it is also possible to migrate a guest system from one piece of hardware to another while it is running.

Originally, Xen was designed to run up to 100 guest systems on one computer, but this number depends strongly on the system requirements of the running guests, especially their memory consumption.
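When shared storage makes a guest's block devices reachable from both hosts, such a migration can be performed with the xm management tool. A minimal sketch, assuming xend on both machines is configured to accept relocation requests; the guest and host names are hypothetical:

```shell
# Live-migrate the domain "guest1" to host2 while it keeps running.
# The guest's block devices must be accessible from both machines.
xm migrate --live guest1 host2.example.com
```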
To limit CPU utilization, the Xen hypervisor offers three different schedulers. The scheduler may also be changed while the guest system is running, making it possible to change the priority of a running guest. On a higher level, migrating a guest may also be used to adjust the available CPU power.
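With the credit scheduler, for example, a guest's share of CPU time can be inspected and adjusted at runtime from Domain-0. A sketch assuming the credit scheduler is active; the domain name is hypothetical:

```shell
# Display the current scheduling parameters of domain "guest1".
xm sched-credit -d guest1
# Give the domain twice the default weight of 256, so it receives a
# larger share of CPU time when the machine is contended.
xm sched-credit -d guest1 -w 512
```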
The Xen virtualization system also has some drawbacks regarding supported hardware. Several closed source drivers, such as those from Nvidia or ATI, do not work as expected. In these cases, you must use the open source drivers where available, even if they do not support the full capabilities of the chips. Several WLAN chips and Cardbus bridges are also not supported when using Xen. In version 2, Xen does not support PAE (physical address extension), which means that it does not support more than 4 GB of memory. ACPI is not supported, so power management and other modes that depend on ACPI do not work. Another limitation of Xen is that it is currently not possible to boot directly from a block device. To boot, the correct kernel and initrd must always be available in Domain-0.
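Whether the CPU itself could address more than 4 GB can be checked on any Linux system by looking for the pae flag. This is only a quick sketch of the check; whether the hypervisor actually uses PAE still depends on the Xen version and build:

```shell
# Print whether the CPU advertises PAE in its feature flags.
if grep -qw pae /proc/cpuinfo; then
    echo "PAE available"
else
    echo "no PAE"
fi
```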
The installation procedure of Xen involves the setup of a Domain-0
domain and the installation of Xen guests. First, make sure that the
needed packages are installed; these include xen-tools-ioemu and a
kernel-xen package. When selecting Xen during installation, Xen is
added to the GRUB configuration. In other cases, make an entry in
/boot/grub/menu.lst. This entry should be similar to the following:
title Xen3
    kernel (hd0,0)/boot/xen.gz
    module (hd0,0)/boot/vmlinuz-xen <parameters>
    module (hd0,0)/boot/initrd-xen
Replace (hd0,0) with the partition that holds your boot files
(see also Chapter 9, The Boot Loader). Replace <parameters>
with the parameters normally used to boot a Linux kernel.
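A filled-in entry might look like the following. The root device and
kernel options are examples only and must match your installation:

```
title Xen3
    kernel (hd0,0)/boot/xen.gz
    module (hd0,0)/boot/vmlinuz-xen root=/dev/hda3 ro
    module (hd0,0)/boot/initrd-xen
```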
Then reboot into Xen mode. This boots the Xen
hypervisor and a slightly modified Linux kernel as Domain-0, which
manages most of the hardware. Apart from the exceptions already mentioned,
everything should work as normal.
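Whether the system really came up under Xen can be verified from Domain-0 with the xm management tool:

```shell
# List all running domains. Immediately after boot, only Domain-0
# should appear, together with its memory size and CPU time.
xm list
```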