VirtDriverGuestCPUTopology

Revision as of 11:08, 19 November 2013 by Daniel Berrange (talk | contribs)

Virtualization Driver Guest CPU Topology

Background problem

Each virtualization driver in OpenStack has its own approach to defining the CPU topology seen by guest virtual machines. The libvirt driver, for example, exposes all vCPUs as individual sockets, each with 1 core and no hyper-threads.
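In libvirt domain XML this topology is expressed with the <topology> element inside <cpu>. As an illustration (a hypothetical 4 vCPU guest), the default described above would look like:

```xml
<cpu>
  <!-- 4 vCPUs presented as 4 sockets, 1 core each, no hyper-threads -->
  <topology sockets='4' cores='1' threads='1'/>
</cpu>
```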

UNIX operating systems will happily use any CPU topology that is exposed to them, within an upper bound on the total number of logical CPUs. That said, there can be performance implications from choosing different topologies. For example, 2 hyper-threads are usually not equivalent in performance to 2 cores or 2 sockets, so operating system schedulers have special logic for task placement. If a host has a CPU with 2 cores of 2 threads each and two tasks to run, the scheduler will try to place them on different cores rather than on different threads within one core. It follows that if such a guest is instead shown 4 sockets, the operating system cannot make optimal placement decisions and may leave tasks competing for constrained thread resources.
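The core-before-thread placement preference described above can be sketched as a toy heuristic. This is not any real scheduler's algorithm, just an illustration of the principle; the function name and tuple encoding are invented for this example:

```python
from itertools import product

def spread_tasks(sockets, cores, threads, ntasks):
    """Toy placement heuristic: enumerate logical CPUs as
    (socket, core, thread) tuples and pick CPUs on distinct cores
    first, falling back to sibling hyper-threads only once every
    core already has a task."""
    cpus = list(product(range(sockets), range(cores), range(threads)))
    chosen, used_cores = [], set()
    # First pass: at most one task per distinct (socket, core) pair.
    for cpu in cpus:
        if len(chosen) == ntasks:
            break
        if cpu[:2] not in used_cores:
            chosen.append(cpu)
            used_cores.add(cpu[:2])
    # Second pass: overflow onto sibling threads if tasks remain.
    for cpu in cpus:
        if len(chosen) == ntasks:
            break
        if cpu not in chosen:
            chosen.append(cpu)
    return chosen
```

With 2 cores of 2 threads and two tasks, the heuristic lands the tasks on two different cores; shown 4 single-core sockets instead, the distinction between core and thread siblings is lost and no such preference can be applied.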

Windows operating systems, meanwhile, are more restrictive in the CPU topology they are willing to use. In particular, some versions restrict the number of sockets they are prepared to use. So if an OS is limited to 4 sockets and an 8 vCPU guest is desired, the hypervisor must expose a topology with at least 2 cores per socket. Failure to do this will result in the guest refusing to use some of the vCPUs it is assigned.
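The arithmetic above is a simple ceiling division. A minimal sketch (the function name is hypothetical, not part of any driver API):

```python
import math

def min_cores_per_socket(vcpus, max_sockets):
    """Smallest cores-per-socket value that exposes all vCPUs while
    keeping the socket count within the guest OS limit."""
    return math.ceil(vcpus / max_sockets)
```

For the example in the text, min_cores_per_socket(8, 4) yields 2 cores per socket, so the guest sees 4 sockets of 2 cores rather than 8 sockets it would refuse to use.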