VifPlugging

A proposed interface for VIF Plugging

  • There is a network driver class. We select an implementation of the class in config (e.g. Quantum, nova-network).
  • The class has a function to request a VIF endpoint on a specific network. This creates and configures the endpoint and returns a type describing its location. There is a corresponding function to tear down an endpoint.
  • This class is hypervisor agnostic and knows nothing about how to attach the VIF endpoint to a VM. It merely puts it in place for the hypervisor to make use of.
  • We negotiate for an endpoint based on, among other things, what the hypervisor driver can cope with. In addition to the network to which the endpoint is attached, the hypervisor driver describes what it wants of the endpoint. For instance, libvirt will say it prefers bridge network types, and the driver class will endeavour to provide one (or throw an error if it can't provide anything suitable). If we wanted to use a direct-mapped PCI device instead, we would request one of those.
  • The driver returns details of the endpoint: the type created and its location (a bridge with its name, a PCI device with its address). A sketch of this interface follows the list.
  • The hypervisor driver receives the VIF endpoint described above. Because it only ever receives one of the types it specifically negotiated for, it knows how to deal with every type it might be given and how to attach each of them to a VM.
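
A minimal Python sketch of what such an interface could look like. The NetworkDriver and VIFEndpoint names and the request_endpoint/release_endpoint signatures are illustrative assumptions for this proposal, not an existing Nova or Quantum API:

  # Illustrative sketch only: names and signatures are assumptions,
  # not an existing Nova/Quantum API.

  class VIFEndpoint(object):
      """Describes an endpoint the network driver has put in place."""
      def __init__(self, vif_type, **details):
          self.vif_type = vif_type   # e.g. 'bridge' or 'pci-direct'
          self.details = details     # e.g. {'bridge_name': 'br100'}
                                     #  or  {'pci_address': '0000:03:00.1'}

  class NetworkDriver(object):
      """Hypervisor-agnostic interface; Quantum and nova-network would
      each provide an implementation, selected in config."""

      def request_endpoint(self, network_id, accepted_types):
          """Create and configure an endpoint on network_id.

          accepted_types lists the endpoint types the hypervisor driver
          can cope with, in order of preference (the negotiation step).
          Returns a VIFEndpoint, or raises if nothing suitable can be
          provided.
          """
          raise NotImplementedError()

      def release_endpoint(self, endpoint):
          """Tear down an endpoint previously returned by request_endpoint."""
          raise NotImplementedError()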

Example control flow

  • We start a VM using libvirt, attached to one network.
  • We ask for an endpoint on the given network, passing libvirt's negotiation options.
  • The network driver returns an endpoint object suited to that purpose; in this case, a bridge network device.
  • The libvirt driver looks at the endpoint object, sees that it represents a bridge device, and generates an appropriate config block.
  • libvirt starts the VM.
  • The VM is migrated. On the new compute node, we ask for an endpoint, receive its object, and libvirt attaches it to the VM. On the old machine, we destroy the endpoint we previously received and the network driver cleans it up. Neither the hypervisor driver nor Quantum needs to know anything about how network migration works.
  • The VM is destroyed. We tell the driver to destroy the endpoint. (A sketch of this flow in code follows the list.)
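
A rough Python sketch of that flow, reusing the hypothetical NetworkDriver/VIFEndpoint interface sketched above. The spawn/destroy shape and the start_domain/stop_domain helpers are illustrative stand-ins; the <interface type='bridge'> element is standard libvirt domain XML:

  # Illustrative sketch reusing the hypothetical interface above.

  def start_domain(instance, interface_xml):
      # Stand-in for building the domain XML and asking libvirt to start the VM.
      print("starting %s with %s" % (instance, interface_xml))

  def stop_domain(instance):
      # Stand-in for destroying the libvirt domain.
      print("stopping %s" % instance)

  def spawn(network_driver, instance, network_id):
      # Negotiate: this libvirt driver prefers a bridge device.
      endpoint = network_driver.request_endpoint(network_id,
                                                 accepted_types=['bridge'])

      # Inspect the endpoint type we were given and generate the matching
      # config block for the domain XML.
      if endpoint.vif_type == 'bridge':
          interface_xml = ("<interface type='bridge'>"
                           "<source bridge='%s'/>"
                           "</interface>" % endpoint.details['bridge_name'])
      else:
          raise Exception("unhandled VIF type: %s" % endpoint.vif_type)

      start_domain(instance, interface_xml)
      return endpoint

  def destroy(network_driver, instance, endpoint):
      stop_domain(instance)
      # The network driver tears down whatever it put in place.
      network_driver.release_endpoint(endpoint)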

Improvements

  • This control flow is mostly independent of the hypervisor driver and can be implemented generically, simplifying all hypervisor drivers.
  • There's no need to select a plugging driver corresponding to our network type; the various plugging driver types all go away, replaced by a small piece of self-selecting code in the hypervisor driver.
  Open question: in some situations libvirt lets you choose between several drivers. That choice would be lost with this proposed method. Are we losing something important?
  • There's a clearer division of responsibility between Nova, the hypervisor driver, and the network code, making it much easier to attach Quantum to Nova cleanly.
  • The calling interface to Quantum is clean and well defined, meaning that it's easy to test in the absence of Nova. Similarly, it's possible to test the hypervisor driver function easily in the absence of Quantum; see the test sketch below.
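
For example, a test along these lines exercises the hypervisor-side code against a fake network driver with no running Nova or Quantum at all; it assumes the VIFEndpoint, spawn and destroy sketches above are in scope:

  # Illustrative test sketch; no real Nova or Quantum code is involved.
  import unittest

  class FakeNetworkDriver(object):
      """Stands in for Quantum/nova-network when testing the hypervisor side."""
      def __init__(self):
          self.released = []

      def request_endpoint(self, network_id, accepted_types):
          assert 'bridge' in accepted_types
          return VIFEndpoint('bridge', bridge_name='br-test')

      def release_endpoint(self, endpoint):
          self.released.append(endpoint)

  class SpawnTestCase(unittest.TestCase):
      def test_spawn_builds_bridge_config(self):
          driver = FakeNetworkDriver()
          endpoint = spawn(driver, 'instance-1', 'net-1')
          self.assertEqual('bridge', endpoint.vif_type)

      def test_destroy_releases_endpoint(self):
          driver = FakeNetworkDriver()
          endpoint = spawn(driver, 'instance-1', 'net-1')
          destroy(driver, 'instance-1', endpoint)
          self.assertEqual([endpoint], driver.released)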