LibvirtXMLCPUModel
Revision as of 11:44, 14 June 2012

The libvirt driver in Essex and earlier uses the Cheetah templating system when generating XML for guests. This has many downsides (see http://wiki.openstack.org/LibvirtXMLConfigAPIs), but the one positive aspect is that it is possible for end users deploying Nova to customize the XML to add features not officially supported. One of the most important missing features is the ability to configure the CPU model exposed to KVM virtual machines. There are a couple of reasons for wanting to specify the CPU model:

  • To maximise performance of virtual machines by exposing new host CPU features to the guest
  • To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults.

In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names (defined in /usr/share/libvirt/cpu_map.xml):

  • "486", "pentium", "pentium2", "pentiumpro", "coreduo", "n270", "qemu32", "kvm32", "cpu64-rhel5", "kvm64", "Conroe", "Penryn", "Nehalem", "Westmere", "Opteron_G1", "Opteron_G2", "Opteron_G3", "Opteron_G4"
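
For illustration (the specific model, flag, and topology values here are hypothetical, not from the original page), such a specification appears in the libvirt guest XML roughly as:

```xml
<!-- Illustrative only: a custom CPU with a named base model, one extra
     required feature flag, and an explicit topology -->
<cpu match='exact'>
  <model>Westmere</model>
  <topology sockets='1' cores='2' threads='2'/>
  <feature policy='require' name='aes'/>
</cpu>
```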

It is also possible to request the host CPU model in two ways:

  • "host-model" - this causes libvirt to identify the named CPU model which most closely matches the host from the above list, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, while maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs.
  • "host-passthrough" - this causes libvirt to tell KVM to pass through the host CPU with no modifications. The difference from host-model is that, instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best possible performance, and can be important to some applications which check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to an exactly matching host CPU.
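
In guest XML terms, these two modes are requested via the mode attribute on the <cpu> element, e.g.:

```xml
<!-- Closest named model, plus extra flags to complete the match -->
<cpu mode='host-model'/>

<!-- Pass the host CPU through to the guest unmodified -->
<cpu mode='host-passthrough'/>
```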

The range of libvirt functionality with regard to CPU models is quite broad, and is described in http://berrange.com/posts/2010/02/15/guest-cpu-model-configuration-in-libvirt-with-qemukvm/. The 'cpu_compare' method in nova.virt.libvirt.connection already deals with checking CPU compatibility between hosts, to allow the scheduler to ensure correct placement of guests during migration. So the primary missing feature for Nova is simply the ability to configure the guest CPU mode + model.
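
The essence of such a compatibility check can be sketched as a feature-flag superset test (a simplified illustration with made-up flag sets; nova's actual cpu_compare delegates the comparison to libvirt):

```python
# Simplified sketch: a guest CPU model is runnable on a host if the host's
# feature-flag set covers everything the guest model requires. This is an
# illustration only; nova's real cpu_compare asks libvirt to do the check.
def cpu_is_compatible(host_flags, guest_flags):
    return set(guest_flags) <= set(host_flags)

# Hypothetical flag sets for demonstration
host = {"sse2", "ssse3", "sse4.1", "sse4.2", "aes", "pclmuldq"}
nehalem_like = {"sse2", "ssse3", "sse4.1", "sse4.2"}
newer_model = nehalem_like | {"aes", "pclmuldq", "avx"}
```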

For the most part it will be sufficient for the host administrator to specify the guest CPU config in the per-host configuration file (/etc/nova/nova.conf). This will be achieved by introducing two new configuration parameters:

  • libvirt_cpu_mode = custom|host-model|host-passthrough
  • libvirt_cpu_model = <one of the named models from /usr/share/libvirt/cpu_map.xml> (this parameter is only valid when libvirt_cpu_mode=custom)

Example 1:


 libvirt_cpu_mode = host-model


Example 2:


 libvirt_cpu_mode = custom
 libvirt_cpu_model = Opteron_G3


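As a hedged sketch of the mechanics (the function name and structure are hypothetical, not nova's actual implementation), the driver would translate these two parameters into guest XML along these lines:

```python
# Hypothetical sketch (not nova's actual code) of how the driver could turn
# the two configuration parameters into a libvirt <cpu> element.
def build_cpu_xml(mode, model=None):
    if mode in ("host-model", "host-passthrough"):
        # Both host modes need no further detail in the XML
        return "<cpu mode='%s'/>" % mode
    if mode == "custom":
        if model is None:
            raise ValueError("libvirt_cpu_model is required when "
                             "libvirt_cpu_mode=custom")
        return "<cpu match='exact'><model>%s</model></cpu>" % model
    raise ValueError("unknown libvirt_cpu_mode: %s" % mode)
```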
The tradeoff the host administrator has to make here is between ensuring maximum migration compatibility and maximum guest performance. The host admin may well be fairly conservative in their choice of CPU model, to maximize migration compatibility by default. With this in mind, it is further desirable that individual disk images can be tagged with an alternative CPU model, which will override the host default. This should be achieved by using the same two configuration parameters described for the host, but passing them to glance as metadata properties when registering the image. For example:


# glance add container_format=ovf \
    libvirt_cpu_mode=custom \
    libvirt_cpu_model=Opteron_G4 \
    name=f16-more is_public=true \
    disk_format=qcow2 < /path/to/disk
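
The intended precedence can be sketched as follows (the helper name is hypothetical; only the override behaviour is from the text above):

```python
# Hypothetical helper (names illustrative) showing the intended precedence:
# a per-image glance property overrides the host-wide nova.conf default.
def effective_cpu_config(image_properties, host_conf):
    mode = image_properties.get("libvirt_cpu_mode",
                                host_conf.get("libvirt_cpu_mode"))
    model = image_properties.get("libvirt_cpu_model",
                                 host_conf.get("libvirt_cpu_model"))
    return mode, model
```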