Neutron ML2

The Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing openvswitch, linuxbridge, and hyperv L2 agents, and is intended to replace and deprecate the monolithic plugins associated with those L2 agents. The ml2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than would be required to add a new monolithic core plugin. A modular agent may be developed as a follow-on effort.

ML2 Drivers

Drivers within ml2 implement separately extensible sets of network types and of mechanisms for accessing networks of those types. Unlike with the metaplugin, multiple mechanisms can be used simultaneously to access different ports of the same virtual network. Mechanisms can utilize L2 agents via RPC and/or use mechanism drivers to interact with external devices or controllers. Type and mechanism drivers are loaded as Python entrypoints using the stevedore library.
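
For illustration, here is roughly how a driver can be loaded through stevedore (a sketch, not the plugin's actual loading code; the 'neutron.ml2.type_drivers' namespace and the 'vlan' alias are assumed entrypoint names):

from stevedore import driver

# Load one type driver by its entrypoint alias; stevedore resolves the
# alias to the class registered under the given namespace in setup.cfg.
mgr = driver.DriverManager(
    namespace='neutron.ml2.type_drivers',
    name='vlan',
    invoke_on_load=True,
)
vlan_driver = mgr.driver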

Type Drivers

Each available network type is managed by an ml2 TypeDriver. TypeDrivers maintain any needed type-specific network state, and perform provider network validation and tenant network allocation. The ml2 plugin currently includes drivers for the local, flat, vlan, gre, and vxlan network types.
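
A minimal sketch of what a type driver looks like (method names follow the ml2 driver API; treat this as illustrative rather than authoritative):

class ExampleTypeDriver(object):
    # Skeleton of an ml2 type driver (illustrative only).

    def get_type(self):
        # Return the network type string this driver manages.
        return 'example'

    def initialize(self):
        # Parse driver configuration and build any type-specific state,
        # such as the pool of allocatable segmentation IDs.
        pass

    def validate_provider_segment(self, segment):
        # Check an admin-supplied provider segment for this type.
        pass

    def reserve_provider_segment(self, session, segment):
        # Mark a validated provider segment as in use.
        pass

    def allocate_tenant_segment(self, session):
        # Pick and reserve an unused segment for a tenant network,
        # returning its description, or None if none are available.
        pass

    def release_segment(self, session, segment):
        # Return a segment to the pool when its network is deleted.
        pass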

Mechanism Drivers

Each networking mechanism is managed by an ml2 MechanismDriver. The MechanismDriver is responsible for taking the information established by the TypeDriver and ensuring that it is properly applied given the specific networking mechanisms that have been enabled.

The MechanismDriver interface currently supports the creation, update, and deletion of network and port resources. For every action that can be taken on a resource, the mechanism driver exposes two methods: ACTION_RESOURCE_precommit, which is called within the database transaction context, and ACTION_RESOURCE_postcommit, called after the database transaction is complete. The precommit method is used by mechanism drivers to validate the action being taken and make any required changes to the mechanism driver's private database. The precommit method should not block, and therefore cannot communicate with anything outside of Neutron. The postcommit method is responsible for pushing the resource change to the entity responsible for applying it. For example, the postcommit method might push the change to an external network controller, which would then update the network resources appropriately based on the change.
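
As a sketch, a mechanism driver that pushes network creations to an external controller might be shaped like this (the controller client is a made-up stand-in; only the precommit/postcommit split is the point here):

class FakeControllerClient(object):
    # Hypothetical stand-in for a real external controller client.
    def create_network(self, network):
        print('pushing network %s to the controller' % network['id'])

class ExampleMechanismDriver(object):
    def initialize(self):
        self.controller = FakeControllerClient()

    def create_network_precommit(self, context):
        # Runs inside the database transaction: validate the request
        # and update only driver-private tables. Must not block or
        # communicate with anything outside of Neutron.
        pass

    def create_network_postcommit(self, context):
        # Runs after the transaction commits: push the change to the
        # entity (here, the fake controller) that applies it.
        self.controller.create_network(context.current)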

Support for mechanism drivers is currently a work-in-progress in pre-release Havana versions, and the interface is subject to change before the release of Havana. In a future version, the mechanism driver interface will also be called to establish a port binding, determining the VIF type and network segment to be used.

ALE Omniswitch Mechanism Driver
https://wiki.openstack.org/wiki/Neutron/ML2/ALE-Omniswitch
Arista Mechanism Driver
https://wiki.openstack.org/wiki/Arista-neutron-ml2-driver
Avaya Networking Mechanism Driver
https://wiki.openstack.org/wiki/Neutron/ML2/AvayaML2Mechanism
Brocade Mechanism Driver
https://wiki.openstack.org/wiki/Neutron/ML2/BrocadeML2Mechanism
Cisco Nexus Mechanism Driver
https://wiki.openstack.org/wiki/Neutron/ML2/MechCiscoNexus
DCFabric Mechanism Driver
https://wiki.openstack.org/wiki/DCFabric-neutron-plugin
Lenovo Networking Mechanism Driver
https://wiki.openstack.org/wiki/Neutron/ML2/LenovoML2Mechanism

Multi-Segment Networks

Virtual networks can be composed of multiple segments of the same or different types. The database schema and driver APIs support multi-segment networks, but the client API for multi-segment networks is not yet implemented.
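
While the client API is pending, the driver API already describes each network as a list of segment mappings. A sketch of what a two-segment network looks like at that level (field names follow the provider extension; illustrative only):

# One virtual network composed of a VLAN segment and a GRE segment;
# there is not yet a client API to request this.
segments = [
    {'network_type': 'vlan', 'physical_network': 'physnet1',
     'segmentation_id': 101},
    {'network_type': 'gre', 'physical_network': None,
     'segmentation_id': 5000},
]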

ML2 Configuration

Using ML2 in Devstack

The ML2 plugin is fully supported in devstack, including configuration of VLAN, GRE, and VXLAN networks. The steps to configure each are covered here.

Configure devstack for ML2 with VLANs

An example control and compute node localrc file is shown here for configuring ML2 to run with VLANs with devstack. This is equivalent to running the OVS or LinuxBridge plugins in VLAN mode.

Add the following to your control node localrc:
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=mynetwork:100:200
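
On your compute node, the same settings should apply (this mirrors the tunnel examples below rather than a tested configuration):
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True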

To set special VLAN parameters for the VLAN TypeDriver, the following variable in localrc can be used. This is a space-separated list of assignment values:
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=600:700)

Configure devstack for ML2 with Tunnel Networks

An example control and compute node localrc file is shown here for configuring ML2 to run with tunnel networks with devstack. This is the most basic form of configuring ML2, and is equivalent to running the OVS plugin with GRE tunnels.

Add the following to your control node localrc:
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True

On your compute node, add the following into your localrc:
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True

To change the range of GRE keys to use for tunnel keys, add the following to localrc:
TENANT_TUNNEL_RANGE=50:100

The above will enable GRE tunnels with OVS. If you want to use VXLAN with OVS, ensure you are running OVS version 1.10 or greater, including the Open vSwitch kernel module (KLM) from the upstream OVS project. Once you have that, the following will enable ML2 with VXLAN tunnels:

Add the following to your control node localrc:
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

On your compute node, add the following into your localrc:
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

To change the range of VXLAN VNIs to use, add the following to localrc:
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=400:500)

Advanced ML2 configuration in devstack

By default, devstack will run ML2 with the OVS agent. To use a different agent, set the following in localrc:
Q_AGENT=linuxbridge

By default, ML2 will not load any MechanismDrivers, and will only work with the OVS, LinuxBridge, and Hyper-V agents. To change this, set the following in localrc. Valid values are the names of MechanismDrivers you want to use:
Q_ML2_PLUGIN_MECHANISM_DRIVERS=<list of MechanismDrivers>

By default, all the TypeDrivers for ML2 are loaded. To change this behavior, set the following in localrc. Valid options are: local, flat, vlan, gre, and vxlan.
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,gre


Meetings

Meeting information can be found here:

https://wiki.openstack.org/wiki/Meetings/ML2

Presentations