Neutron ML2

The Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing openvswitch, linuxbridge, and hyperv L2 agents, and is intended to replace and deprecate the monolithic plugins associated with those L2 agents. The ml2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than would be required to add a new monolithic core plugin. A modular agent may be developed as a follow-on effort.

ML2 Drivers

Drivers within ml2 implement separately extensible sets of network types and of mechanisms for accessing networks of those types. Unlike with the metaplugin, multiple mechanisms can be used simultaneously to access different ports of the same virtual network. Mechanisms can utilize L2 agents via RPC and/or use mechanism drivers to interact with external devices or controllers. Type and mechanism drivers are loaded as Python entry points using the stevedore library.
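
As a concrete sketch, type and mechanism drivers are registered under the neutron.ml2.type_drivers and neutron.ml2.mechanism_drivers entry point groups in neutron's setup.cfg; the paths below are illustrative of the Havana-era source tree and may change:

[entry_points]
neutron.ml2.type_drivers =
    vlan = neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver
    gre = neutron.plugins.ml2.drivers.type_gre:GreTypeDriver
neutron.ml2.mechanism_drivers =
    linuxbridge = neutron.plugins.ml2.drivers.mech_linuxbridge:LinuxbridgeMechanismDriver
    openvswitch = neutron.plugins.ml2.drivers.mech_openvswitch:OpenvswitchMechanismDriver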

Each available network type is managed by an ml2 TypeDriver. TypeDrivers maintain any needed type-specific network state, and perform provider network validation and tenant network allocation. The ml2 plugin currently includes drivers for the local, flat, vlan, gre, and vxlan network types.
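
For example, the type drivers to load, the types usable for tenant networks, and the per-type allocation ranges are set in the [ml2] and type-specific sections of ml2_conf.ini; the values below are illustrative:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vlan,gre

[ml2_type_vlan]
# physical_network:vlan_min:vlan_max available for tenant allocation
network_vlan_ranges = physnet1:1000:2999

[ml2_type_gre]
# GRE tunnel IDs available for tenant allocation
tunnel_id_ranges = 1:1000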

Each networking mechanism is managed by an ml2 MechanismDriver. Support for mechanism drivers is currently a work-in-progress in pre-release Havana versions, and the interface is subject to change before the release of Havana. MechanismDrivers are currently called both inside and following DB transactions for network and port create/update/delete operations. In a future version, they will also be called to establish a port binding, determining the VIF type and network segment to be used.
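
Mechanism drivers are selected by alias in the same file; for instance, to drive both the openvswitch and linuxbridge L2 agents on different ports of the same network (illustrative, and subject to the interface changes noted above):

[ml2]
# aliases resolve to MechanismDriver entry points via stevedore
mechanism_drivers = openvswitch,linuxbridge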

Multi-Segment Networks

Virtual networks can be composed of multiple segments of the same or different types. The database schema and driver APIs support multi-segment networks, but the client API for multi-segment networks is not yet implemented.

ML2 Configuration

Using ML2 in Devstack

The ML2 plugin is fully supported in devstack, including configuration of VLAN, GRE, and VXLAN networks. The steps to configure these are covered below.

Configure devstack for ML2 with VLANs
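
A minimal localrc sketch for VLAN tenant networks, assuming devstack's ENABLE_TENANT_VLANS and ML2_VLAN_RANGES variables (variable names may differ across devstack versions):

Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
# illustrative physical network name and VLAN range
ML2_VLAN_RANGES=physnet1:100:200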

Configure devstack for ML2 with Tunnel Networks

An example control and compute node localrc is shown below for configuring ML2 to run with tunnel networks in devstack. This is the most basic way to configure ML2, and is equivalent to running the OVS plugin with GRE tunnels.

Add the following to your control node localrc:
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True

On your compute node, add the following into your localrc:
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True

The above will enable GRE tunnels with OVS. If you want to use VXLAN with OVS, ensure you are running OVS version 1.10 or greater, including the Open vSwitch kernel module (KLM) from the upstream OVS project. Once you have that, the following will enable ML2 with VXLAN tunnels:

Add the following to your control node localrc:
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

On your compute node, add the following into your localrc:
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
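
Once stacked, one way to sanity-check that the VXLAN (or GRE) tunnels were created is to inspect the OVS configuration on each node; with the OVS agent, tunnel ports should appear on the br-tun bridge:

sudo ovs-vsctl show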

Meetings

Meeting information can be found here:

https://wiki.openstack.org/wiki/Meetings/ML2

Presentations