Quantum-BasicVlanPlugin

'Basic VLAN' plugin for Quantum

Intended Design and implementation plan

At the moment this wiki page is just a sparse collection of free-standing thoughts. Hopefully, they will become more organized over the next few days...

Caution: most of these thoughts were gathered after midnight, so some of them could turn out to be just nonsense. And by writing 'some' I'm being quite optimistic.

The general architecture for this plugin consists of a dispatcher component, which becomes aware of the particular compute node (i.e. hypervisor) where the instance is being created and forwards the API request to the appropriate driver, which knows how to deal with networking on that hypervisor.
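
As an illustration of this dispatching idea, a minimal sketch could look like the following. All class, method, and parameter names below are hypothetical, not existing Quantum code.

  # Minimal sketch of the dispatcher idea: pick the hypervisor-specific
  # driver for the compute node hosting the instance and forward the call.
  # All names here are hypothetical, not existing Quantum code.
  class BasicVlanDispatcher(object):

      def __init__(self, drivers):
          # 'drivers' maps a hypervisor type ('KVM', 'XenAPI', 'ESX') to an
          # object that knows how to configure networking on that hypervisor
          self._drivers = drivers

      def plug(self, host, hypervisor_type, network_id, vif_id):
          driver = self._drivers[hypervisor_type]
          return driver.plug(host, network_id, vif_id)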

This model, however, implies that the Quantum plugin will be doing not just the 'logical' plugging of the VIF/vNIC, but also the 'physical' plugging, which is not what happens today. The current workflow can be summarized as follows (an illustrative pseudo-code sketch follows the list):

  1. The Compute Manager invokes the Network Manager to allocate networks for the instance (compute.manager._make_network_info)
  2. The Quantum Network Manager: i) creates VIFs in the DB, ii) creates a port on the Quantum network, iii) attaches the VIF to the port
     NOTE: the attach operation is merely logical at this point, as there is not yet a VM at all!
  3. Compute Manager invokes a spawn operation on the Compute Driver
  4. The Compute Driver uses the VIF driver to set up networking for the VIFs
  5. The Compute Driver creates the VIF for the VM (with a few differences between KVM, XenAPI, and ESX)
  6. The Compute Driver starts the VM, with the VIFs attached to the appropriate network
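
For illustration only, the flow above could be condensed into the following pseudo-code; the method names are simplifications and do not necessarily match the real nova interfaces.

  # Simplified, illustrative pseudo-code of the current workflow;
  # method names are approximations of the real nova code paths.
  def run_instance(compute_manager, compute_driver, instance):
      # steps 1-2: the network manager creates the VIFs and ports and
      # performs the (purely logical) attach
      network_info = compute_manager.network_manager.allocate_for_instance(instance)
      # steps 3-6: the compute driver sets up host networking via the VIF
      # driver, creates the VIFs for the VM, and finally boots it
      compute_driver.spawn(instance, network_info)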

Given this workflow, it turns out that the operations the Basic VLAN plugin would perform are exactly the same as the ones performed by the VIF driver. It therefore makes sense to leverage the VIF driver and leave the workflow as it is.

Therefore, a first, potentially extremely simple implementation of this plugin would consist of a simple 'VLAN ID' tracker: it would just provide the VIF driver with the appropriate VLAN ID to configure on each compute host.
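
A toy, in-memory sketch of such a tracker is shown below; the real plugin would persist allocations in the Quantum DB, and all names are made up for illustration.

  # Toy in-memory VLAN ID tracker; the real plugin would back this with
  # the Quantum database. All names here are illustrative.
  class VlanTracker(object):

      def __init__(self, vlan_min=100, vlan_max=4094):
          self._free = set(range(vlan_min, vlan_max + 1))
          self._allocated = {}   # network_id -> vlan_id

      def allocate(self, network_id):
          if network_id in self._allocated:
              return self._allocated[network_id]
          # raises KeyError if the VLAN range is exhausted
          vlan_id = self._free.pop()
          self._allocated[network_id] = vlan_id
          return vlan_id

      def release(self, network_id):
          self._free.add(self._allocated.pop(network_id))

      def get(self, network_id):
          # the VIF driver asks which VLAN ID to configure on the host
          return self._allocated[network_id]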

Once we get to this stage we can start working on a more complex and detailed implementation. Reaching this stage is already a win for us, as we will have a plugin which works (at least) with libvirt, XenAPI, and VMwareAPI.

I would not reject the idea of improving the above workflow in order to reduce the coupling between Nova and Quantum (a large chunk of networking operations is currently performed in nova through the driver). The ideal end point would be a situation in which we would no longer need a VIF driver at all.

Question: does the above point make sense? Possible argument: if the VIF driver is hypervisor-specific, then it deserves to be under the control of the entity which governs the hypervisor, i.e. nova-compute. Counter-argument: Quantum deserves to govern the network subsystem of the hypervisor itself. Just as nova-compute manages CPU and memory virtualization, Quantum will manage network virtualization in the hypervisor (and, I think, nova-volume block storage virtualization).

Improvement #1 - The Basic VLAN Quantum plugin provides the VIF drivers. This is just a refactoring: the VIF driver is then loaded in nova-compute as usual.
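
Roughly, the plugin package could ship something along these lines. The class name, the helper, and the get_vlan_id call on the client are all illustrative, and the plug/unplug signature only approximates the nova VIF driver interface.

  # Illustrative only: a libvirt VIF driver shipped by the plugin and loaded
  # in nova-compute through the usual VIF driver flag. The interface shown
  # here is an approximation, not the exact nova one.
  def _setup_vlan_bridge(vlan_id, network, mapping):
      # hypothetical helper: would create the VLAN sub-interface and bridge
      # on the compute host and plug the VIF's tap device into it
      pass

  class QuantumBasicVlanLibvirtVifDriver(object):

      def __init__(self, quantum_client):
          self._quantum = quantum_client

      def plug(self, instance, network, mapping):
          # ask the plugin which VLAN ID this Quantum network maps to
          # (get_vlan_id is a hypothetical client call), then configure it
          vlan_id = self._quantum.get_vlan_id(network['id'])
          _setup_vlan_bridge(vlan_id, network, mapping)

      def unplug(self, instance, network, mapping):
          # teardown would mirror plug(); omitted in this sketch
          pass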

Improvement #2 - Quantum does the 'physical' plugging in place of the VIF driver, no longer only the 'logical' one. This means that nova should be able to run without a real VIF driver.
  - The Compute Manager could provide Quantum with the host name and type (ESX, KVM, XEN); for efficiency reasons everything should occur in a single call to the Quantum Manager (see the sketch below).
  - Potential issue: Compute Drivers rely on the VIF driver for generating network info. Investigate whether we could have a generic mock VIF driver.
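
One possible shape for that single call, purely hypothetical and not part of the current Quantum plugin interface:

  # Hypothetical extension of the plugin interface (not the real Quantum API):
  # the logical and the physical plug happen in a single call, keyed by the
  # compute host and its hypervisor type.
  class BasicVlanPlugin(object):

      def __init__(self, vlan_tracker, drivers):
          self._vlans = vlan_tracker    # e.g. the VlanTracker sketched above
          self._drivers = drivers       # hypervisor type -> physical plug driver

      def plug_interface(self, tenant_id, net_id, port_id, interface_id,
                         host=None, hypervisor_type=None):
          # 1) logical attachment in the Quantum DB, as happens today
          #    (store the port_id <-> interface_id association)
          # 2) physical plugging, delegated to a hypervisor-specific driver
          if host and hypervisor_type:
              vlan_id = self._vlans.get(net_id)
              self._drivers[hypervisor_type].plug(host, vlan_id, interface_id)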

Improvement #3 - Get to a position in which this Quantum plugin could work with other IaaS solutions and potentially even in standalone mode. To the best of my knowledge this is somehow already achieved today with the OVS plugin, and maybe even with the UCS plugin (although the VIF driver is still necessary).
  - We will also need a generic, possibly agent-based, solution for detecting new VIFs attached to bridges on the hypervisors (see the sketch below).
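
A rough sketch of what such an agent could look like for Linux bridges; the bridge list and the callback are illustrative, and detection here is done by simply polling /sys.

  # Rough sketch of a polling agent that watches Linux bridges for newly
  # attached VIFs. Bridge names and the callback are illustrative.
  import os
  import time

  def attached_ports(bridge):
      # /sys/class/net/<bridge>/brif lists the interfaces plugged into the bridge
      path = '/sys/class/net/%s/brif' % bridge
      return set(os.listdir(path)) if os.path.isdir(path) else set()

  def watch_bridges(bridges, on_new_vif, interval=2):
      known = dict((b, attached_ports(b)) for b in bridges)
      while True:
          for bridge in bridges:
              current = attached_ports(bridge)
              for vif in current - known[bridge]:
                  on_new_vif(bridge, vif)    # e.g. notify the Quantum plugin
              known[bridge] = current
          time.sleep(interval)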