Revision as of 13:53, 6 February 2013

Quantum MidoNet Plugin

Scope:

The goal of this blueprint is to implement a Quantum plugin for the MidoNet virtual networking platform.

Use Cases:

To provide MidoNet virtual networking technology as one of the options for those using Quantum as their cloud networking orchestration layer.

Some of the benefits that come from using MidoNet in your IaaS cloud are:

  • the ability to scale IaaS networking to thousands of compute hosts
  • the ability to offer L2 isolation that is not bound by the VLAN limit (4096 unique VLANs)
  • making your entire IaaS networking layer completely distributed and fault-tolerant

Please note that, at the time of this writing, MidoNet only works with Libvirt/KVM.

Implementation Overview:

In the MidoNet virtual topology constructed through the Quantum integration, there is one virtual router that must always exist, called the provider virtual router. It belongs to the service provider, and it is capable of routing traffic among the tenant routers as well as into and out of the provider network. The virtual ports on this router are mapped to interfaces on the 'edge' servers that are connected to the service provider's uplink routers. Additionally, to enable the metadata service, another virtual router called the metadata virtual router and a virtual bridge called the metadata bridge must exist. The metadata bridge is connected to the metadata router, and a virtual port on this bridge is mapped to an interface on the host running nova-api (the metadata server), effectively allowing metadata traffic from the VMs to traverse the virtual network and reach the metadata server. Note that additional steps are needed to configure the metadata host so that it accepts both the VM traffic and the management traffic properly, but those are out of the scope of the plugin. The plugin expects these virtual routers to have already been set up and configured in order to function properly. The initial setup of the virtual topology looks as follows:

File:Spec-QuantumMidoNetPlugin$image1.png
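Since the plugin expects the provider and metadata routers to be pre-provisioned, a startup sanity check along the following lines could catch a misconfigured midonet_plugin.ini early. This is a hypothetical sketch, not code from the plugin; the function name and the in-memory router set are illustrative stand-ins:

```python
# Hypothetical sketch: verify at startup that the provider and metadata
# virtual routers named in midonet_plugin.ini already exist in MidoNet.
# The function and the router-ID set are illustrative, not the real plugin.


def check_preprovisioned(existing_router_ids, provider_router_id, metadata_router_id):
    """Raise if either required virtual router is missing from MidoNet."""
    missing = [rid for rid in (provider_router_id, metadata_router_id)
               if rid not in existing_router_ids]
    if missing:
        raise RuntimeError("pre-provisioned routers missing: " + ", ".join(missing))


# Passes silently when both routers exist.
routers = {"prov-router-id", "meta-router-id"}
check_preprovisioned(routers, "prov-router-id", "meta-router-id")
```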

The files of the plugin are shown below:


quantum/quantum/plugins/midonet/__init__.py
quantum/quantum/plugins/midonet/midonet_lib.py
quantum/quantum/plugins/midonet/plugin.py
quantum/etc/plugins/midonet/midonet_plugin.ini


The MidonetPluginV2 class in 'midonet/plugin.py' extends db_base_plugin_v2.QuantumDbPluginV2, l3_db.L3_NAT_db_mixin, and portsecurity_db.PortSecurityDbMixin, and implements all the methods necessary to provide the Quantum L2, L3, and security group features:



class MidonetPluginV2(db_base_plugin_v2.QuantumDbPluginV2, l3_db.L3_NAT_db_mixin, portsecurity_db.PortSecurityDbMixin):


When a tenant creates a network in Quantum, a tenant virtual bridge is created in MidoNet. Just like a Quantum network, a virtual bridge has ports. When a VM is attached to a Quantum network's port, it is also attached to the corresponding MidoNet virtual bridge port. VMs attached to the same virtual bridge have L2 connectivity. When a tenant creates a router in Quantum, a tenant virtual router is created in MidoNet. Just like the l3_agent in Quantum, the tenant virtual router acts as the gateway for the tenant networks, and it can also perform NAT to implement the floating IP feature. Thus, with the MidoNet plugin, there is no need to run the l3_agent. The tenant router is linked to the provider router in the 'router_gateway_set' method, and the tenant bridge is linked to the tenant router in the 'router_interface_add' method, connecting the VMs to the Internet. Quantum's external networks are treated differently: they are linked directly to the provider router, so the corresponding bridge is called the provider bridge. After the tenant bridges and routers are created and connected via the Quantum API, the virtual topology might look as follows:

File:Spec-QuantumMidoNetPlugin$image2.png
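The network-to-bridge mapping described above can be sketched as follows. This is a minimal illustration, not the plugin's actual code: MidoNetClient here is an in-memory stand-in for the real MidoNet API client, and the method names are assumptions.

```python
# Hypothetical sketch: one MidoNet virtual bridge per Quantum network.
# MidoNetClient is an in-memory stand-in for the real MidoNet API client.
import uuid


class MidoNetClient:
    """Stand-in client that tracks virtual bridges by ID."""

    def __init__(self):
        self.bridges = {}

    def create_bridge(self, tenant_id, name):
        bridge_id = str(uuid.uuid4())
        self.bridges[bridge_id] = {"tenant_id": tenant_id, "name": name}
        return bridge_id

    def delete_bridge(self, bridge_id):
        del self.bridges[bridge_id]


class MidonetPluginSketch:
    """Sketch of the plugin's L2 handling: a tenant bridge backs each network."""

    def __init__(self, client):
        self.client = client
        self.network_to_bridge = {}

    def create_network(self, tenant_id, name):
        # Create the backing virtual bridge in MidoNet, then record the mapping.
        bridge_id = self.client.create_bridge(tenant_id, name)
        self.network_to_bridge[name] = bridge_id
        return bridge_id

    def delete_network(self, name):
        # Deleting the Quantum network tears down the backing bridge as well.
        self.client.delete_bridge(self.network_to_bridge.pop(name))
```

In the real plugin these operations would also persist the network through the inherited QuantumDbPluginV2 database methods; the sketch shows only the MidoNet side of the mapping.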

The plugging of VMs is handled by both Nova's Libvirt VIF driver and the MidoNet agent running on each compute host. The VIF driver is responsible for creating a tap interface for the VM, and the MidoNet agent is responsible for binding the tap interface to the appropriate virtual port. To enable the MidoNet Libvirt VIF driver, you need to specify its path in Nova's configuration file (shown in the Configuration variables section). It is not one of Nova's default VIF drivers because it requires the VM's interface type to be 'generic ethernet' in Libvirt, and as of G-3, this interface type is not supported. You would have to get the VIF driver from GitHub: https://github.com/midokura/midonet-openstack. It is the VIF driver's responsibility to notify the MidoNet agent, via the MidoNet API, to bind the interface to a port, which effectively plugs the VM into the MidoNet virtual network.

TODO: Image 3 -> Hypervisor host diagram with midonet agent, nova compute and vif driver, and a mapping of tap + virtual port + VM.
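The VIF plugging flow above can be sketched as follows. The class and function names are illustrative assumptions, not the real driver or MidoNet API; the stand-in merely records which interface gets bound to which virtual port.

```python
# Hypothetical sketch of the VIF plugging flow: the VIF driver derives a tap
# interface name for the VM and notifies MidoNet (here a stand-in that records
# bindings) to bind it to the virtual port. Names are illustrative only.


class FakeMidoNetApi:
    """Stand-in for the MidoNet API: records port-to-interface bindings."""

    def __init__(self):
        self.bindings = {}

    def bind_port(self, port_id, host, interface_name):
        self.bindings[port_id] = (host, interface_name)


def plug_vif(api, host, port_id, vm_id):
    # The real driver creates the tap device on the hypervisor; here we only
    # derive its name to illustrate the flow.
    tap_name = "tap" + vm_id[:11]
    api.bind_port(port_id, host, tap_name)
    return tap_name
```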

MidoNet comes with its own DHCP implementation, so Quantum's DHCP agent does not need to run. Each virtual bridge has a MidoNet DHCP service associated with it. The MidoNet DHCP service stores the subnet entries that are mapped to Quantum subnets (however, MidoNet currently supports only one subnet per network). Quantum subnets are registered with the MidoNet DHCP service when they are created. DHCP requests are intercepted by the MidoNet agent running on the hypervisor host, which handles them there. The following diagram shows how DHCP is handled in MidoNet on a compute host:

TODO: Image4 -> Hypervisor host diagram with midonet agent and two VMs, with DHCP handling (include Quantum subnets)
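The subnet registration step, including the one-subnet-per-network limitation noted above, can be sketched like this. The class is an illustrative stand-in, not the real MidoNet DHCP API:

```python
# Hypothetical sketch: registering a Quantum subnet with a bridge's MidoNet
# DHCP service, enforcing the current one-subnet-per-network limitation.
# The class is an illustrative stand-in, not the real MidoNet API.


class BridgeDhcp:
    """Per-bridge DHCP configuration holding at most one subnet."""

    def __init__(self):
        self.subnet = None

    def add_subnet(self, cidr, gateway_ip):
        # MidoNet currently supports only one subnet per network.
        if self.subnet is not None:
            raise ValueError("MidoNet supports only one subnet per network")
        self.subnet = {"cidr": cidr, "gateway_ip": gateway_ip}
        return self.subnet
```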

As mentioned before, the MidoNet plugin extends SecurityGroupDbMixin and implements all the security group features using MidoNet's packet filtering API. Because MidoNet handles the entire security group implementation, Quantum's security group agent does not need to run. The following diagram shows how the filter rules are applied in the virtual network:

TODO: Image5 -> Virtual topology diagram of a bridge and two VMs, with 'inbound' and 'outbound' filters, mapped to Quantum security group
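The translation from security group rules to a port's inbound filter chain can be sketched as follows. The rule fields and accept/drop representation are assumptions for illustration, not MidoNet's actual rule schema:

```python
# Hypothetical sketch: translating Quantum security group ingress rules into
# an ordered inbound filter chain, ending in a default drop as security group
# semantics require. Field names are illustrative, not the real rule schema.


def build_inbound_chain(sg_rules):
    """Translate ingress rules into accept rules followed by a default drop."""
    chain = []
    for rule in sg_rules:
        chain.append({
            "action": "accept",
            "protocol": rule["protocol"],
            "port": rule["port"],
            "remote_cidr": rule["remote_cidr"],
        })
    # Anything not explicitly accepted is dropped (default-deny).
    chain.append({"action": "drop"})
    return chain
```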

As a final implementation note, the plugin will support the metadata server, but without overlapping IP address support, for G-3.

Putting it all together, here is an example of MidoNet virtual network topology:

TODO: Image6 -> The entire topology that includes provider router, metadata router/bridge, 2 tenant routers, 1 external bridge, 2 tenant bridges on one router, 1 tenant bridge on the other router

Configuration variables:

1. To specify MidoNet plugin in Quantum (quantum.conf):


core_plugin = quantum.plugins.midonet.plugin.MidonetPluginV2


2. MidoNet plugin specific configuration parameters (midonet_plugin.ini):



[midonet]
midonet_uri = <MidoNet API server URI>
username = <MidoNet admin username>
password = <MidoNet admin password>
provider_router_id = <Virtual provider router ID>
metadata_router_id = <Virtual metadata router ID>
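These parameters follow standard INI syntax, so the [midonet] section can be read with Python's standard library. The values below are placeholders, not real credentials or router IDs:

```python
# Sketch: reading the [midonet] section with the standard library.
# All values are placeholders for illustration only.
import configparser
import io

SAMPLE_INI = """
[midonet]
midonet_uri = http://localhost:8080/midonet-api
username = admin
password = secret
provider_router_id = 11111111-1111-1111-1111-111111111111
metadata_router_id = 22222222-2222-2222-2222-222222222222
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(SAMPLE_INI))
midonet_cfg = dict(parser["midonet"])  # e.g. midonet_cfg["username"] == "admin"
```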


3. Set MidoNet Libvirt VIF driver in Nova's configuration (nova.conf):



libvirt_vif_driver=midonet.nova.virt.libvirt.vif.MidonetVifDriver


Test Cases:

We will identify the parts of the plugin code that are prone to bugs and prepare unit tests for them.