
Neutron-Linux-Bridge-Plugin

Revision as of 07:35, 10 January 2012 by Snaiksat (talk)

Update: There is currently an issue with the implementation: the DHCP response is not reaching the VM. This might be a problem with the iptables rules; it is under investigation.

Quantum L2 Linux Bridge Plugin

<<TableOfContents()>>

Abstract

The proposal is to implement a Quantum L2 plugin that configures a Linux Bridge to realize Quantum's Network, Port, and Attachment abstractions. Each Quantum network would map to an independent VLAN managed by the plugin. Sub-interfaces corresponding to a VLAN would be created on each host, and a Linux Bridge would be created enslaving that sub-interface. One or more VIFs (VM Interfaces) in that network on that host would then plug into that Bridge. To a certain extent this effort will achieve the goal of creating a Basic VLAN Plugin (as discussed in the Essex Summit) for systems which support a Linux Bridge.
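The mapping described above can be sketched as the sequence of commands an agent might issue to realize one Quantum network on a host. This is an illustrative sketch only: the interface and bridge names, and the use of vconfig/brctl command strings, are assumptions for exposition, not the plugin's actual code or naming scheme.

```python
def bridge_commands(physical_if, vlan_id, tap_dev):
    """Build the (hypothetical) commands that realize one network:
    a VLAN sub-interface, a bridge enslaving it, and the VIF's tap."""
    subif = "%s.%d" % (physical_if, vlan_id)   # e.g. eth1.100
    bridge = "brq%d" % vlan_id                 # illustrative bridge name
    return [
        "vconfig add %s %d" % (physical_if, vlan_id),  # VLAN sub-interface
        "brctl addbr %s" % bridge,                     # create the bridge
        "brctl addif %s %s" % (bridge, subif),         # enslave sub-interface
        "brctl addif %s %s" % (bridge, tap_dev),       # plug the VM's tap
    ]
```

For example, `bridge_commands("eth1", 100, "tapabc")` yields the four commands that put a VIF on VLAN 100 via a dedicated bridge.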

Requirements

Support for the Linux Bridge (the brctl utility, provided by the bridge-utils package).

Design

The plugin manages VLANs. The actual network artifacts are created by an agent (daemon) running on each host on which the Quantum network has to be created. This agent-based approach is similar to the one employed by the OpenVSwitch plugin.
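The agent's behavior can be sketched as a polling loop: it periodically reads port/VIF bindings from the plugin and realizes any binding whose tap device exists locally. All names here (`fetch_bindings`, `realize_binding`, and the binding dictionary shape) are hypothetical stand-ins for illustration, not the plugin's actual API.

```python
import time

def agent_loop(fetch_bindings, realize_binding, local_devices,
               interval=2.0, max_iterations=None):
    """Hypothetical sketch of the per-host agent daemon: poll the
    plugin's bindings and act on those whose tap device is local."""
    n = 0
    while max_iterations is None or n < max_iterations:
        for binding in fetch_bindings():            # poll the plugin DB
            if binding["tap_device"] in local_devices():
                realize_binding(binding)            # create VLAN/bridge, enslave tap
        n += 1
        if max_iterations is None or n < max_iterations:
            time.sleep(interval)
```

In the real daemon the loop would run forever; `max_iterations` is included here only so the sketch can terminate.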

The diagram below explains the working of the plugin and the agent in the context of creating networks and ports, and plugging a VIF into the Quantum port.

[Diagram: Quantum L2 Linux Bridge Plugin Operation]

  1. The tenant requests the creation of a Quantum network and a port on that network. The plugin creates a network resource and assigns a VLAN to this network. It then creates a Port resource and associates it with this network.
  2. The tenant requests the instantiation of a VM. Nova-compute will invoke the Linux-bridge VIF driver (this driver is different from the Linux bridge VIF driver that comes packaged with Nova), which will create a tap device. Subsequently, nova-compute will instantiate the VM such that the VM's VIF is associated with the tap device.
  3. The tenant will request plugging the above VIF into the Quantum port created earlier. The plugin will create the association of the VIF and the port in the DB.
  4. The agent daemon on each host in the network will pick up the association created in Step 3.
  5. If a tap device exists on that host corresponding to that VIF, the agent will create a VLAN and a Linux Bridge on that host (if they do not already exist).
Note: A convention to use the first 11 characters of the UUID is followed to name the tap device. The agent deciphers the name of the tap device from the VIF UUID using this convention.
  6. The agent will subsequently enslave the tap device to the Linux Bridge. The VM is now on the Quantum network.
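The naming convention from the note above can be expressed directly: the tap device name embeds the first 11 characters of the interface UUID, so the agent can recover the expected device name from the VIF UUID alone. The "tap" prefix here is an assumption for illustration; the source only specifies the 11-character convention.

```python
def tap_device_name(vif_uuid):
    """Derive the tap device name from a VIF UUID using the
    first-11-characters convention described above."""
    # "tap" prefix is an assumed convention for this sketch
    return "tap" + vif_uuid[:11]
```

For example, a VIF with UUID `0a1b2c3d-4e5f-...` would map to the device `tap0a1b2c3d-4e`.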

Integration with Nova

  1. A nova-compute VIF driver will be written. This VIF driver will be very similar to the one used by the OpenVSwitch plugin.
  2. A Linux network driver (linux_net.py) extension will also be required so as to be able to plug the gateway and DHCP servers. This driver will also create a tap device for plugging the gateway interface. The DHCP server will be associated with this tap interface.

Handling the Gateway Interface

There is a bit of complexity involved in creating and initializing the gateway interface, since the following set of operations will need to be performed in sequence:

  1. The aforementioned gateway driver will first need to create the tap device and associate a MAC address provided by Nova with this device. This should happen in the "plug" hook of the driver and executes within Nova's process space.
  2. The QuantumManager (network manager within nova that interfaces with Quantum) will plug the gateway tap device created earlier into a port on the network. This will result in a logical binding between the gateway tap device interface ID and the Quantum port/network (LinuxBridge plugin will handle this). This step would execute in the Quantum server's process space.
  3. The agent running on the relevant host will pick up the logical binding and also the presence of the gateway tap device. It will enslave the gateway tap device to the relevant bridge (a new bridge will be created if one does not exist). This step would execute in the agent's process space.
  4. Once the gateway tap device is enslaved to the bridge, the gateway initialization would have to be done in the "initialize_gateway" hook of the linux network driver (linux_net.py) extension. This, among other things, will involve associating the DHCP IP address with the gateway tap device and sending a gratuitous ARP for this IP. This step would execute in nova's process space.

Note that in the above sequence, steps 1 and 4 execute in Nova's process space, whereas steps 2 and 3 do not. This implies that the execution of step 4 has to be delayed until steps 2 and 3 have completed. This will be achieved by spawning a thread to execute step 4: the thread will wait until the gateway device is enslaved to the bridge, and then perform the gateway initialization.
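The deferred-initialization idea can be sketched as follows: step 4 runs in a background thread that blocks until the agent (step 3) has enslaved the gateway tap device to the bridge. The `device_is_enslaved` and `initialize_gateway` callables are hypothetical hooks standing in for the real checks and for the linux_net.py "initialize_gateway" logic.

```python
import threading
import time

def deferred_gateway_init(device_is_enslaved, initialize_gateway,
                          poll_interval=1.0):
    """Spawn a thread that delays gateway initialization (step 4)
    until the agent has enslaved the gateway tap device (step 3)."""
    def worker():
        while not device_is_enslaved():   # wait for the agent's work
            time.sleep(poll_interval)
        # Safe now: associate the DHCP IP, send gratuitous ARP, etc.
        initialize_gateway()

    t = threading.Thread(target=worker)
    t.daemon = True   # do not block Nova's shutdown
    t.start()
    return t
```

A production version would likely add a timeout and error reporting rather than polling indefinitely.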

(Contact: Sumit Naiksatam, Salvatore Orlando)