[lug] virtualization first boot question
dan at usrsbin.com
Sat Sep 18 12:30:54 MDT 2010
On 9/18/2010 11:31 AM, karl horlen wrote:
> wow. your insightful post brought up more questions and revealed my fundamental lack of understanding of the basic paravirtual / HVM thing ;). i probably need to do some more investigating on my own but i'll take a few stabs at some inline questions below too.
> love those ringing endorsements ;) i personally appreciate info on the stuff that doesn't work as much as the stuff that does. it saves future headaches
I am just not much of a fan of Xenserver. It works great if you use an
OS that it FULLY supports. If you don't, then it's a real pain.
> the way i understand it, is these are generic tools (command line and or gui interfaces) that allow you to use a single set of commands to manage the same set of tasks for *different* virtual implementations (xen, kvm, virtual box, etc). an abstraction layer so to speak. correct?
IIRC, libvirt and virt-manager support KVM or Xen.
> are they that useful versus the stock commands provided by each implementation? my thought is that you're going to want to know the underlying implementation commands and procedures anyway, so you can understand what's going on under the hood to configure networking and troubleshoot when doodoo hits the fan. put another way, it sounds to me like these tools are really *additional* tools that come in handy but not exclusive as replacements for the underlying implementation knowledge. correct? (examples i'm thinking of might be like copying / creating new VMs based on a base img or something).
libvirt will abstract a lot of the underlying system commands. If you
want to know more, then you will want to look at the man pages for
tunctl, brctl, qemu, ifconfig, iptables, and the xm command. I got
tired of running qemu manually, so I wrote a python script that manages
VMs for me.
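For the flavor of it, here is a toy sketch of what such a wrapper does by
hand, i.e. the plumbing libvirt would otherwise hide. All the names (tap0,
br0, guest.img) are made up, and the tunctl/brctl/ifconfig steps need root,
so the sketch only prints the commands instead of running them:

```shell
#!/bin/sh
# Hypothetical sketch: the manual steps to give a qemu guest bridged
# networking. Printed rather than executed (the real thing needs root).
TAP=tap0
BRIDGE=br0
IMG=guest.img

echo "tunctl -t $TAP -u $(whoami)"   # create a tap device for the guest
echo "brctl addif $BRIDGE $TAP"      # plug it into an existing bridge
echo "ifconfig $TAP up"

# and the qemu invocation itself:
CMD="qemu -m 512 -hda $IMG -net nic -net tap,ifname=$TAP,script=no"
echo "$CMD"
```

Every line of that is a thing libvirt does for you behind a single
"start this VM" call.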
> do you mean in dom0 (the master host) or in all domu VMS (guests). i assumed the booted host kernel had to be xen aware but that you could install whatever you wanted in a guest. sounds like you're saying no?
Both dom0 (the host OS) and the VM will have to run Xen-aware kernels if
you want maximum performance. If the guest OS does not run a Xen
kernel, then Xen falls back to fully emulated HVM mode, which is
slow (10-15% overhead).
> so i think what you're saying is HVM equals fully emulated mode. whereas paravirtualization allows the guest OS to somehow reach through to the underlying HW virtualization features of the cpu. in which case, paravirtualization should be more efficient / faster. correct?
Not completely: HVM mode in Xen is fully emulated and uses the HW
features of the CPU to speed it up slightly. The problem is that IO
functions like disk and network don't benefit from the CPU extensions.
With paravirtualization the VM can offload all of the IO functionality
to the host OS, and the host OS will do all of the IO scheduling.
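The difference shows up directly in an xm-style guest config. A sketch,
with made-up disk paths and names (the PV guest boots a Xen-aware kernel
handed to it by dom0 and uses the paravirtual disk/network drivers; the
HVM variant boots an unmodified OS through hvmloader and gets emulated
devices):

```
# PV guest: dom0 supplies a Xen-aware kernel; IO goes through the
# paravirtual xvd/vif drivers
kernel  = "/boot/vmlinuz-2.6-xen"
ramdisk = "/boot/initrd-2.6-xen.img"
memory  = 512
disk    = ['phy:/dev/vg0/guest,xvda,w']
vif     = ['bridge=xenbr0']

# HVM guest (alternative): unmodified OS, fully emulated devices
# builder = 'hvm'
# kernel  = '/usr/lib/xen/boot/hvmloader'
# disk    = ['phy:/dev/vg0/guest,hda,w']
# vif     = ['type=ioemu, bridge=xenbr0']
```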
> i think this means that paravirtualization requires that each *guest* VM kernel be "smarter" and compiled for the underlying virtualization implementation (or is it just the cpu architecture in question) than a HVM guest kernel.. since the underlying hw is completely abstracted correct?
Paravirtualization is a technique where the host OS and the guest OS
will cooperate with each other to perform functions that are difficult
to emulate on the x86 CPU. When you run a VM in PV mode, the VM is much
smarter and knows that it is a VM. The guest OS has to be ported to the
hypervisor in order for PV to work.
> well based on what i said above, i totally got the paravirtualized thing wrong. i think HVM is generically known as paravirtualization? or perhaps i mixed the two up - got them backwards the way i described them.
You have them straight. Xen isn't the only VM technology that supports
paravirtualization. VMWare, Virtualbox, and KVM can also do it. If you
go into the kernel source and run make menuconfig, you will see an option
called Paravirt Ops. If you enable it, you can tell the kernel to
load support for a bunch of different hypervisors.
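Roughly what that looks like in a 2.6-era .config (option names from
memory, so double-check them in your own tree):

```
# Under "Processor type and features" -> "Paravirtualized guest support":
CONFIG_PARAVIRT_GUEST=y   # the Paravirt Ops umbrella option
CONFIG_PARAVIRT=y         # the paravirt_ops hook layer itself
CONFIG_XEN=y              # boot this kernel as a Xen domU
CONFIG_KVM_GUEST=y        # KVM paravirt support (kvm-clock, etc.)
CONFIG_VMI=y              # VMware's VMI paravirt interface
```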
> let me rephrase. it's the xen aware host kernel that presides over the system when you boot it (replacing the non virtual kernel that used to boot) providing the capabilities / abstraction required to implement x number of VM guests.
> in other virtual implementations, there is no such thing as dom0, but the paradigm is the same. your newly installed virtualized *host* OS is going to be the one that boots your base physical server and allows you to create and run guest VMs on the system. right?
That's also correct. Xen and KVM have different things that have to be
in the kernel. Xen changes the kernel fundamentally at its lowest
levels (which is why it's not in the official Linux kernel). KVM is
three .ko files that you load with modprobe.
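A quick sketch of loading it (module names as of the 2.6 kernels of the
day; the modprobe itself needs root, so this just prints the right
command for your CPU):

```shell
#!/bin/sh
# Pick the KVM module matching the CPU's virtualization extensions.
# modprobe pulls in the core kvm.ko automatically via dependencies.
if grep -qw vmx /proc/cpuinfo 2>/dev/null; then
    echo "modprobe kvm-intel"    # Intel VT-x
elif grep -qw svm /proc/cpuinfo 2>/dev/null; then
    echo "modprobe kvm-amd"      # AMD-V
else
    echo "no vmx/svm flag found; KVM needs the HW extensions"
fi
```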
> it sounds like you favor KVM vs xen. any thoughts on virtualbox or vmware?
I prefer KVM over Xen. Xen changes the kernel fundamentally to deal
with its guests, and from what I gather, most of the kernel developers
despise it. KVM allows a user space program to use the processor VM
extensions. This means that the emulator has to be ported to use KVM.
After that, the kernel runs the VM as if it were just another process on
the system.
I also think that KVM is a lot easier to set up.
As far as the others...
I haven't really used VMWare all that much. I have used ESXi, which is
the baby version of the real thing and it worked all right.
Virtualbox is a completely different animal from Xen or KVM. Xen and
KVM are bare metal hypervisors (at least I think KVM is bare metal).
Virtualbox is hosted and runs on top of a booted OS. It needs the host
to provide video, keyboard, and mouse input. For virtualizing servers,
Virtualbox is not what you want. If you want to make a test/play
environment on your desktop then Virtualbox is nice. I use Virtualbox a
lot and I love it. I have never used VMWare Fusion or VMWare Server so
I can't compare. My boss has VMWare Fusion on his Mac and he prefers it.
> thanks for the great input.