This editorial has been written by Anthony Smith from Inception Hosting. Thank you, Anthony, for your valued contribution to LowEndBox!
This overview is intended to be just that: it is the first installment in a series and is not meant as a fine-detail technical paper, simply a no-nonsense overview to help in your decision making.
While this may be old news to some, the subject comes up time and time again: what is the difference, which one is faster, which one should I use, and so on. So let's start with a non-exhaustive comparison table:
KVM is full hardware virtualisation: you can run almost any operating system as a guest (BSD, Windows, Linux), and with the virtio drivers you will get near-native performance; some experiments have shown as little as a 3% loss versus native hardware under ideal circumstances.
It supports both ISO-based and template-based installation, and it offers good separation between guests in terms of privacy. It can, however, suffer I/O lag under heavy load, which impacts both the guest operating systems and the host.
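As a quick check of the hardware side of this, the CPU extensions that KVM relies on are visible on any Linux box; a minimal sketch using standard Linux paths (nothing here is provider-specific):

```shell
# Count CPU threads advertising hardware virtualisation extensions
# (vmx = Intel VT-x, svm = AMD-V); a count of 0 means KVM guests cannot run.
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Inside a KVM guest, confirm the paravirtual virtio devices are in use:
lspci 2>/dev/null | grep -i virtio
```

If the second command lists virtio network and block devices, the guest is getting the near-native I/O path rather than fully emulated hardware.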
Each guest (VM) runs as a process on the host node. While this is great for discovering which guest is causing issues when required, it can also cause problems: if the host is under heavy load, all guests suffer.
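Because each guest is just a process, ordinary process tools apply. A sketch assuming a qemu-kvm based host (the bracketed grep pattern is a common trick so the grep does not match itself):

```shell
# Show each guest's PID, CPU and memory share, busiest first; on a KVM
# host every guest appears as a qemu process, so a runaway guest is
# easy to spot with nothing more than ps.
ps -eo pid,pcpu,pmem,args --sort=-pcpu | grep '[q]emu'
```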
You can over-allocate RAM with little effort; however, unless the host is full SSD this is rarely done in practice, due to the overhead it puts on the host node and the obvious performance issues that follow.
Because KVM is native in most modern kernels, it has a performance advantage over the others in some circumstances; it is still fairly new and under very active development.
Most people select KVM for excellent performance and flexibility, although it is perhaps not quite as stable as Xen, owing to its relative immaturity.
Xen comes in two flavours that can run simultaneously on the same physical host: Xen PV (paravirtualisation) and Xen HVM (full hardware virtualisation).
Xen PV guests (in the hosting industry) tend to be template based, for rapid deployment and snappy performance. You can run your own kernel in Xen PV, and this is pretty much the default these days. You can only run Linux on Xen PV (BSD is possible with additional configuration, but not common).
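From inside a guest you can usually tell whether you are on Xen at all, and in which mode; a sketch assuming a kernel with Xen support, which populates the standard /sys/hypervisor interface:

```shell
# "xen" in /sys/hypervisor/type means the guest runs under Xen:
if [ -r /sys/hypervisor/type ] && grep -q xen /sys/hypervisor/type; then
    # Newer kernels also expose the guest mode (PV vs HVM) here:
    cat /sys/hypervisor/guest_type 2>/dev/null || echo "Xen (mode unknown)"
else
    echo "not a Xen guest"
fi
```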
Xen HVM runs much like KVM. It has better out-of-the-box drivers for Linux distributions, since PV support has shipped by default in most kernels since around 2006, so you do not need to install virtio to get a performance boost. However, NetBSD and Windows perform poorly on Xen HVM compared to KVM; while you can overcome this to some degree with the Xen PV drivers for Windows, it still does not run as well as KVM does out of the box, so to speak.
Xen is quite old now and very mature; most people select Xen for good performance with exceptional stability.
Xen hosts will usually pre-allocate RAM and CPU cores to the Xen hypervisor itself, so that it has dedicated resources the guests cannot impact, in order to achieve that stability.
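On a Xen host this reservation is visible through the xl toolstack; a sketch (the field names are the standard `xl info` ones, and dom0 is Xen's privileged control domain):

```shell
# Total RAM in the box versus RAM still free for guests; the difference
# includes what dom0 (the control domain) keeps for itself:
xl info | grep -E '^(total_memory|free_memory)'

# Domain-0 is listed alongside the guests with its own memory allocation:
xl list
```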
OpenVZ is hugely popular in the hosting industry due to its rapid deployment and very high density. It achieves this because the host kernel is shared with the guests, along with RAM, CPU and disk; with only fairly basic separation between guest and host, the I/O bottleneck is almost non-existent.
In terms of disk access speed and latency, OpenVZ is a clear winner compared to KVM and Xen. However, this comes at the cost of separation, both in terms of privacy and in terms of how much impact one guest OS can have on the host node and on the other guests: all individual processes are visible to the host node, and you cannot encrypt your data from it.
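The shared-kernel model is easy to see from either side; a sketch (CTID 101 is a hypothetical container ID, and `user_beancounters` is the standard OpenVZ accounting file):

```shell
# Inside a container: per-resource usage, limits and failure counts,
# all enforced by the shared host kernel. A non-zero failcnt in the
# last column means that resource limit has been hit.
head /proc/user_beancounters

# On the host node: every process of container 101 is plainly visible
# and can be run or inspected directly.
vzctl exec 101 ps aux
```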
OpenVZ supports Linux only (unless you use the commercial Parallels product, which supports Windows after a fashion).
OpenVZ can also be nested inside Xen or KVM to achieve even greater density. Because of the sheer number of containers you can run on a single host node, OpenVZ pricing stays much more competitive than KVM and Xen.
In the next articles, all three will be benchmarked in an attempt to show some real-world differences between the three technologies, covering both guest performance and host node impact.