Neutron/ML2/ALE-Omniswitch

= OmniSwitch Plug-in for OpenStack =

The OmniSwitch OpenStack Networking Plug-in (OONP) for OpenStack offers infrastructure services for OpenStack logical networks by orchestrating Alcatel-Lucent OmniSwitches as the underlying physical network. When used in conjunction with the OpenVSwitch plug-in, end-to-end multi-tenant network provisioning through OpenStack Networking (Quantum/Neutron) is achieved.

The plug-in is intended to be installed in existing OpenStack environments to configure the underlying physical network for its cloud networking operations.


 * Note. This OONP release supports only the base plug-in features such as link aggregation, Virtual Chassis, VPA-based networking, and driver support for Telnet and REST. Advanced features such as UNP, MVRP, SPB, and QoS are not currently supported.

OmniSwitch OpenStack Networking Plug-in Architecture
The OpenStack Networking ML2 plug-in provides an extensible architecture that supports a variety of mechanism drivers for configuring physical networks. The architecture provides an environment where multiple independent drivers can be used to configure different network devices from different vendors. Each driver uses its own internal mechanism to communicate with its respective network elements. From the OpenStack Neutron-server perspective, a single interface is presented through which the required networking services are provided to the OpenStack cloud applications. This allows OpenStack Networking to configure the physical network as well as the virtual switch instances running on the hypervisors. In addition, the OpenStack Networking L3-agent and DHCP-agent are fully supported.

The OmniSwitch OpenStack Networking Plug-in works with the OVS ML2 driver.

The OmniSwitch ML2 Mechanism Driver supports the following hardware platforms with their respective AOS software releases.
 * OS6900 and OS10K with AOS 732-R01 SW release and above


 * OS685X and OS9000 with AOS 645-R02 SW release and above


 * OS6250 and OS6450 with AOS 664-R01 and above, limited to the features supported on this platform


 * OS6860 and OS6860E with AOS 811-R01.



In this deployment scenario, the OmniSwitch ML2 Mechanism Driver uses configuration elements from the OpenVSwitch database to ensure configuration consistency and consistent VLAN ID assignment between it and the OVSNeutronPluginV2 plug-in.

VLAN Based Tenant Networks
The plug-in supports the VLAN network type. This means that logical networks (also referred to as “tenant networks”) are realized using VLANs in the OmniSwitch; thus, any operation related to tenant networks performed in OpenStack Networking is translated and configured in the OmniSwitch using VLANs. The VLAN ID is obtained from the reserved VLAN range defined in the plug-in configuration file.

VLAN IDs are used efficiently on switch ports: VLANs are provisioned and de-provisioned across switches as the virtual machines connected to tenant networks are created and destroyed.
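The provision/de-provision lifecycle described above can be sketched as reference counting per (switch, port, VLAN) tuple. This is an illustrative sketch only; the class and method names are hypothetical, not the plug-in's actual API:

```python
from collections import defaultdict

class VlanProvisioner:
    """Illustrative sketch of the VLAN lifecycle; not the plug-in's real class."""

    def __init__(self):
        # (switch_ip, port, vlan_id) -> number of attached VMs using it
        self._refs = defaultdict(int)

    def vm_created(self, switch_ip, port, vlan_id):
        """Provision the VLAN on the port only for the first VM that needs it."""
        key = (switch_ip, port, vlan_id)
        first = self._refs[key] == 0
        self._refs[key] += 1
        return first  # True -> configuration would be pushed to the switch

    def vm_destroyed(self, switch_ip, port, vlan_id):
        """De-provision the VLAN from the port when the last VM is gone."""
        key = (switch_ip, port, vlan_id)
        self._refs[key] -= 1
        return self._refs[key] == 0  # True -> VLAN removed from the port
```

The point of the sketch is that switch configuration is only touched on the first and last VM of a tenant network on a given port, which is what keeps VLAN usage on the physical ports minimal.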

Moreover, connectivity from the compute hosts to the physical network is trunked to allow traffic only from the VLANs configured on the host by the virtual switch.

The compute host VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent VNP mechanism. VNP classification is performed on the basis of either the MAC address of the virtual machine or the VLAN tag. While both methods are supported, only one can be used within an OpenStack installation instance. Network node connectivity (DHCP and L3 services) is automatically managed using VLAN-Port-Assignment.

Additional Supported Features
The following additional features are also supported in this release.

Device specific switch_access_method
This feature allows a different access method (TELNET or REST) to be used for each individual device in the topology. If an AOS 6.X device is mistakenly configured to use the REST interface, an ALERT message is logged and TELNET is used automatically on that device.
 * Note. This parameter is added as an additional entry in the existing device configuration. Please refer to the .ini file for the proper format.
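The fallback behavior described above can be sketched as follows; the function name and the version check are hypothetical, and standard logging stands in for the plug-in's ALERT message:

```python
import logging

log = logging.getLogger("oonp")

def select_access_method(device_method, global_method, aos_major_version):
    """Return TELNET or REST for one device (illustrative sketch only).

    A blank per-device method falls back to the global switch_access_method;
    REST on an AOS 6.x device falls back to TELNET with an ALERT-level log.
    """
    method = (device_method or global_method or "TELNET").upper()
    if method == "REST" and aos_major_version == 6:
        log.error("ALERT: REST not supported on AOS 6.x device; using TELNET")
        method = "TELNET"
    return method
```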

VPA based host classification method
In addition to supporting UNP profiles on the edge switches for the interfaces connected to the Compute nodes, a static 802.1q-tagged Vlan-Port-Association (VPA) method is also supported. This is mainly useful for supporting an OS6450 as an edge switch.

Switch save config interval
The time interval for periodically saving the switch configuration must be in the range of 600 - 1800 seconds. If a value outside this range is configured, an ALERT message is logged and the minimum value (600 seconds) is used automatically.
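The clamping behavior can be sketched as follows (the function name is hypothetical, and logging stands in for the plug-in's ALERT message):

```python
import logging

log = logging.getLogger("oonp")

SAVE_INTERVAL_MIN = 600   # seconds (documented minimum, also the fallback)
SAVE_INTERVAL_MAX = 1800  # seconds (documented maximum)

def validate_save_interval(configured):
    """Return a switch_save_config_interval within the documented range."""
    if not SAVE_INTERVAL_MIN <= configured <= SAVE_INTERVAL_MAX:
        log.error("ALERT: switch_save_config_interval %s out of range; "
                  "using %s", configured, SAVE_INTERVAL_MIN)
        return SAVE_INTERVAL_MIN
    return configured
```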

OmniSwitch OpenStack Networking Plug-in (OONP) Overview
OmniSwitch OpenStack Networking Plug-in (OONP) provides OpenStack Networking (Neutron) with the ability to manage the Layer-2 configuration of OmniSwitch devices. The OONP plug-in supports the following features:


 * 802.1q VLAN based tenant networks.


 * Multiple physical topologies - ranging from a single switch to multi-switch core-edge and spine-leaf topologies.


 * Edge port (Host VM connections) automatic configuration based on VLAN Port Assignment (VPA).

The following product matrix shows which features are supported/used for physical OmniSwitch configuration:

While a mix of switch features is supported by the plug-in, only one Port Configuration method can be used in the OpenStack configuration. For example if you choose to use MVRP to configure Uplink Ports, all switches MUST support the MVRP feature. This is the case for BOTH the Edge Port Configuration and Uplink Port Configuration parameters.
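A pre-deployment check for this constraint can be sketched as follows; the feature matrix shown is hypothetical sample data, not the actual product matrix:

```python
def check_method_support(devices, method):
    """Return the switches that do NOT support the chosen configuration method.

    `devices` maps a switch name to the set of methods it supports.
    An empty result means the method may be used consistently across
    the whole topology, as the text above requires.
    """
    return sorted(name for name, feats in devices.items() if method not in feats)

# Hypothetical feature matrix for illustration only.
devices = {
    "edge-os6900": {"MVRP", "VPA", "UNP"},
    "core-os6900": {"MVRP", "VPA", "UNP"},
    "edge-os6450": {"VPA"},
}
```

With this sample data, choosing MVRP would be rejected because `edge-os6450` does not list it, while VPA would be accepted by every switch.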

OmniSwitch OpenStack Networking Plug-in Installation
The plug-in is delivered as a tar.gz file containing the Python modules and supporting applications, and can be found on the PyPI site:
 * https://pypi.python.org/pypi/networking-ale-omniswitch

The plug-in is installed on the OpenStack controller node; the installation can be performed using the pip command:

pip install networking-ale-omniswitch

Neutron must be configured to use the OONP at this point. A typical installation uses the OONP in combination with the OVS plug-in. The plug-in configuration is defined in two (2) files in the Ubuntu environment on the controller node:

1. /etc/init/neutron-server.conf:

 script
   [ -x "/usr/bin/neutron-server" ] || exit 0
   [ -r /etc/default/openstack ] && . /etc/default/openstack
   [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
   [ -r "$NEUTRON_PLUGIN_CONFIG" ] && DAEMON_ARGS="$DAEMON_ARGS --config-file=$NEUTRON_PLUGIN_CONFIG"
   [ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
   [ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=/var/log/neutron/neutron-server.log"
   exec start-stop-daemon --start --chuid neutron --exec /usr/bin/neutron-server -- \
     --config-file=/etc/neutron/neutron.conf \
     --config-file=/etc/neutron/plugins/ml2/omniswitch_network_plugin.ini ${DAEMON_ARGS}
 end script

2. /etc/neutron/neutron.conf - the plug-in configuration is specified using the following entry:

 core_plugin = ml2

Manual Changes Required for Neutron Configuration
After the successful installation of the OONP plug-in, manual changes are required to Neutron's configuration:

1. In ‘/etc/neutron/plugins/ml2/ml2_conf.ini’, update the following line to include omniswitch as below:

 mechanism_drivers = openvswitch,omniswitch

2. Update the OONP’s configuration file with the topology details and configuration options that you want to use:

 /etc/neutron/plugins/ml2/omniswitch_network_plugin.ini

3. Restart the neutron-server:

 service neutron-server restart

After running the above steps, you are ready to use the OpenStack Neutron APIs, the Horizon web UI, or the CLI to manage the cloud network.

OmniSwitch OpenStack Networking Plug-in Configuration
The plug-in requires configuration data about the plug-in operational details, details of the devices, the physical network topology, and the mechanisms to be used to provide the end-to-end connectivity across core and edge network switches.

When the plug-in is installed, the configuration file will contain “default” values. These values must be modified by the user before using the plug-in.

Any changes made in the configuration file during run-time require the Neutron-server to be restarted for the changes to take effect.

Below is an example omniswitch_network_plugin.ini file. Additionally refer to the “OmniSwitch OONP Installation and Multi-Switch Edge-Core Topology Example” chapter for example topologies and their associated configuration parameters.

 [ml2_ale_omniswitch]
 # This is used to define the edge switches and which ports the compute/network
 # nodes are attached to. The entry may contain 1 or more device definitions.
 # Each definition is a colon-separated list of fields: ip_address, device_type,
 # user_name, password, command_prompt, access_method, and the edge/uplink
 # port list(s). Blank entries for user-name, password, or command-prompt
 # specify default values. A blank value for access_method will result in
 # using the global switch_access_method.
 omni_edge_devices = 192.168.222.33:OS6900: : : : :TELNET:1/19,
                     192.168.222.35:OS6900: : : : :TELNET:1/16:1/20
 # Used to define 0, or more, core switch configurations; the entry follows the
 # same format as that of the omni_edge_devices.
 omni_core_devices = 192.168.222.34:OS6900: : : : :TELNET:1/19 1/20
 # This is used to specify which switch and port the DHCP server (network node)
 # is connected to.
 dhcp_server_interface = 192.168.222.33: : : : :TELNET:1/18
 # The default global method to access devices if not overridden by the
 # switch_access_method parameter of the omni_edge_devices, omni_core_devices,
 # and/or dhcp_server_interface.
 switch_access_method = TELNET
 # SWITCH_SAVE_CONFIG_INTERVAL:
 # This is used to specify how often (in seconds) the config changes in the
 # switches are to be saved. The valid range is 600 - 1800 seconds. If value is
 # out of range, it will default to 600 seconds.
 switch_save_config_interval = 600
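A device definition such as those above can be parsed by splitting on colons. The field names below are inferred from the examples in this document (one field is left blank in every example here and its meaning is not specified, so it is labeled `reserved`); the helper is a hypothetical sketch, not the plug-in's parser - check the shipped .ini comments for the authoritative format:

```python
def parse_device_entry(entry):
    """Sketch: split one omni_*_devices entry into named fields.

    Field order is an assumption inferred from this document's examples;
    blank fields mean "use the default value".
    """
    fields = [f.strip() for f in entry.split(":")]
    keys = ("ip", "device_type", "username", "password", "prompt",
            "reserved", "access_method", "edge_ports", "uplink_ports")
    device = dict(zip(keys, fields))
    # Multiple ports in one field are separated by spaces, e.g. "1/19 1/20".
    for k in ("edge_ports", "uplink_ports"):
        device[k] = device.get(k, "").split()
    return device
```

For example, an entry with explicit credentials and multiple uplink ports would yield a username of `openstack`, the `TELNET` access method, and the uplink list `["1/19", "1/20"]`.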

=Topology Examples=

This chapter presents example OpenStack topologies with the associated OmniSwitch OpenStack Networking Plug-in (OONP) configurations. The configurations presented are based on two (2) variants of topologies:


 * A Single OS6900 switch


 * Multiple OS6900 switches.

Network configuration and management is achieved by using a combination of the OONP to configure the physical switches and OpenVSwitch (OVS) to configure the compute and network nodes.

The OONP supports tenant network isolation using VLANs; therefore, the OpenStack instance must generally be configured to use VLANs. These configurations do not use the flat-dhcp or tunnel-based tenant topologies (GRE or VxLAN).

The OONP supports multiple OmniSwitch product families as well as a variety of configuration and management methods. Generally, only one classification method and one uplink management method are available for a given OpenStack instance. For example, if VLAN classification is selected as the edge port configuration method it will be used for all edge switch configurations. Likewise, if MVRP is chosen as the uplink port configuration method, MVRP will be used for all switches. This results in an OONP configuration that utilizes the common configuration options amongst all of the switches within the physical topology.

The configurations presented in this document are based on the package release for OpenStack Liberty.

OmniSwitch OONP Installation and Multi-Switch Edge-Core Topology Example
This topology utilizes three (3) OmniSwitch 6900s in a core-edge configuration. Due to the homogeneous use of the OS6900 switch in this topology, the advanced configuration features of MVRP and vNP can be utilized.

The physical network is composed of two (2) 'edge switches', which provide end-station connectivity, and one (1) 'core switch'. It should be noted that 'edge switches' have OpenStack nodes (and possibly other computing devices) connected to them, while 'core switches' are connected only to other switches, either 'cores' or 'edges'.

The compute node VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent vNP mechanism. The VM MAC address is used to identify and create the correct 802.1q VLAN tag configuration on the edge port (called mac-address classification).

The Network node VLAN trunk configuration is managed by the OmniSwitch plug-in using VLAN-Port-Assignment (VPA). (This is required because vNP classification occurs on ingress and the DHCP server, located on the network node, must have static VLAN connectivity to receive the tenant VM DHCP DISCOVER broadcast message.)

VLAN uplink connectivity between the edge switches and the core switch is learned and configured automatically using MVRP.

This example provides steps on installing and configuring OONP. The physical configuration is composed of two (2) compute nodes, one (1) network node, and a separate controller node. There are three (3) physical networks (refer to “Figure: Multi-Switch Edge-Core Topology"):


 * The 'public' network, which has connections to both the controller and network nodes


 * The internal management network, which has connections to all devices and nodes


 * The 'private' tenant network, which has connections to the compute and network nodes

The following switch configuration assumptions are used in the example:


 * The management/admin network is 10.1.2.0/24


 * The EMP is used as the management interface


 * The tenant VLAN range is 1005-1015


 * The OmniSwitch device plug-in will use the factory default credentials to login to the switches (admin:switch)


 * The configuration assumes the factory default prompt on all of the switches (->)


 * The OmniSwitch device plug-in will use the telnet driver to communicate with the switches.

Refer to the Multi-Switch Topology OONP configuration file example for the following parameters:


 * The general OmniSwitch plug-in configuration elements selecting MAC_ADDRESS qualification, MVRP configuration, and TELNET switch communication methods.


 * The VLAN range configuration, mapping the usage of VLANs 1005-1015.

The switch connection topology in “Figure: Multi-Switch Edge-Core Topology" is mapped directly into the omni_edge_devices, omni_core_devices, and dhcp_server_interface configuration parameters in the example OONP configuration file. Each parameter may have multiple switch definitions separated by commas (','). The configuration attributes for the switch definition are as follows:

 ip_address:device_type:user_name:password:command_prompt:access_method:endpoint_port(s):uplink_port(s)

In this example the factory defaults are used for authentication credentials and the default CLI prompt. Additionally, the plug-in global switch_access_method will be used, producing an entry of the following format:

n.n.n.n:OS6900: : : : :X:Y, where:

 * n.n.n.n is the switch management IP address,

 * X is the compute or network node endpoint connection port (in slot/port format),

 * Y is the inter-switch (uplink) connection port.


 * Note. The device specifications may include multiple interface ports for both the endpoint and inter-switch interfaces, separated by spaces. Additionally, the network node is specified separately in the dhcp_server_interface parameter and should not be included in the *_device configuration parameters.

The corresponding OVS agent configuration file for this example is shown in configuration example. It should be noted that the tenant_network_type and network_vlan_ranges parameters duplicate the definitions in the OONP configuration file found on the controller node.

omniswitch_network_plugin.ini entries:

 [ml2_ale_omniswitch]
 omni_edge_devices = 10.1.2.33:OS6900: : : : :TELNET:1/16:1/19,
                     10.1.2.35:OS6900: : : : :TELNET:1/16:1/20
 omni_core_devices = 10.1.2.34:OS6900: : : : :TELNET:1/19 1/20
 dhcp_server_interface = 10.1.2.33:OS6900: : : : :TELNET:1/15
 switch_access_method = TELNET
 switch_save_config_interval = 600

ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-eth1
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1005:1015

 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Installing and Configuring the OONP
To install and configure the OONP plug-in do the following.

1. Download and install the OONP plugin using the pip command:

 pip install networking-ale-omniswitch

2. Edit the Neutron-server's configuration file to use the OmniSwitch Network plug-in. Edit "/etc/neutron/neutron.conf" and update the 'core_plugin' parameter as follows:

 core_plugin = ml2

3. Edit "/etc/default/neutron-server" to indicate the plug-in configuration file to the Neutron-server. Update the file with:

 NEUTRON_PLUGIN_CONFIG=/etc/neutron/plugins/ml2/ml2_conf.ini

 # Edits for OONP plugin conf file inclusion
 OONP_CONFIG="--config_file /etc/neutron/plugins/ml2/omniswitch_network_plugin.ini"
 NEUTRON_PLUGIN_CONFIG="${NEUTRON_PLUGIN_CONFIG} ${OONP_CONFIG}"

4. Edit /etc/neutron/plugins/ml2/omniswitch_network_plugin.ini file to match Liberty topology port configuration.

5. Restart neutron server on controller (as root run "service neutron-server restart").

Multi-Switch All Edge Topology Example
This topology consists of three (3) OmniSwitch OS6900s in a mesh interconnect. Each of the switches has at least one (1) OpenStack node connected to it (either compute or network). This configuration does not contain a core switch so the configuration file will not have the omni_core_devices configuration element.

The compute node VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent vNP mechanism. The incoming 802.1q tag is used to identify and create the correct 802.1q VLAN tag configuration on the edge port (called vlan classification).

As in the previous topology, the Network node VLAN trunk configuration is managed by the OmniSwitch plug-in using VPA.

The inter-switch link 802.1q configuration is also managed using VPA.



Starting with a functioning OpenStack Liberty configuration (with a functioning Neutron OVS networking infrastructure), the instance is composed of three (3) compute nodes, a network node, and the controller. The instance contains three (3) networks:



 * The ‘public’ network, which has connections to both the controller and network nodes


 * The internal management network, which has connections to all devices and nodes


 * The ‘private’ tenant network, which has connections to the compute and network nodes

The following switch configuration assumptions are used in the example:


 * The management/admin network is 192.168.2.0/24


 * The EMP is used as the management interface


 * The tenant VLAN range is 1005-1015


 * The OmniSwitch device plug-in will use the username of openstack and the password of secret to login to the switches


 * The configuration assumes the factory default prompt on all of the switches (->)


 * The OmniSwitch device plug-in will use the telnet driver to communicate with all of the switches (as specified by the plug-in global switch_access_method). However, the REST method will be used to control the switch at 192.168.2.51.

Refer to the Multi-Switch Topology OONP configuration file example for the following parameters:


 * The general OmniSwitch plug-in configuration elements selecting VLAN qualification, VPA configuration, and TELNET switch communication methods.


 * The VLAN range configuration, mapping the usage of VLANs 1005-1015.

The switch connection topology in “Figure: Multi-Switch All Edge Topology” is mapped directly into the omni_edge_devices and dhcp_server_interface configuration parameters in the example OONP configuration file. Note that the omni_core_devices element is absent from the configuration. The configuration attributes for the switch definitions follow the same format described in the previous example.

In this example the authentication credentials of openstack:secret and the default CLI prompt are used, producing an entry of the following format:


 * n.n.n.n:OS6900:openstack:secret: : :X:Y, where;


 * n.n.n.n is the switch management IP address,


 * X is the compute or network node endpoint connection port (in slot/port format), Y is the inter-switch (uplink) connection port.

omniswitch_network_plugin.ini entries:

 [ml2_ale_omniswitch]
 omni_edge_devices = 192.168.2.50:OS6900:openstack:secret: : :TELNET:1/16:1/19 1/20,
                     192.168.2.51:OS6900:openstack:secret: : :REST:1/16:1/19 1/20,
                     192.168.2.52:OS6900:openstack:secret: : :TELNET:1/16:1/19 1/20
 dhcp_server_interface = 192.168.2.50:OS6900:openstack:secret: : :TELNET:1/15
 switch_access_method = TELNET
 switch_save_config_interval = 600

ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-eth1
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1005:1015

 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Single-Switch Topology Example
This configuration utilizes a single OmniSwitch OS6900 to provide the OpenStack tenant network interconnectivity. The configuration is limited to a single ‘edge’ switch.

The compute node VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent vNP mechanism. The VM mac-address will be used to identify and create the correct 802.1q VLAN tag configuration on the host port.

The Network node VLAN trunk configuration is managed by the OmniSwitch plug-in using VLAN-Port-Assignment. (This is required because vNP classification occurs on ingress and the DHCP server located on the network node must have VLAN connectivity to receive the tenant VM DHCP DISCOVER broadcast message.)

Starting with a functioning OpenStack Liberty configuration (with a functioning Neutron OVS networking infrastructure), the instance is composed of two (2) compute nodes and a combined controller/network node. The instance contains three (3) networks:


 * The ‘public’ network, which has a connection to the controller/network node;


 * The internal management network, which has connections to the OmniSwitch devices and nodes;


 * The ‘private’ data network, with connections to the compute and controller/network nodes.

The following switch configuration assumptions are used in the example:


 * The management/admin network is 192.168.1.0/24


 * The EMP is used as the management interface


 * The tenant VLAN range is 1005-1015


 * The OmniSwitch device plug-in will use the factory default credentials to login to the switch (admin:switch)


 * The configuration assumes the factory default prompt on the switch


 * The OmniSwitch device plug-in will use the telnet driver to communicate with the switch

Refer to the Single-Switch Topology OONP configuration file example for the following parameters:




 * The general OmniSwitch plug-in configuration elements selecting MAC_ADDRESS qualification and TELNET switch communication methods.


 * The VLAN range configuration, mapping the usage of VLANs 1005-1015.

The switch connection topology in “Figure: Single Switch Topology” is mapped directly into the omni_edge_devices and dhcp_server_interface configuration parameters in the example OONP configuration file.

In this example the factory defaults are used for authentication credentials and the default CLI prompt; additionally, the plug-in global switch_access_method will be used. However, this example does NOT make use of core-switch connections, producing an entry of the following format: n.n.n.n:OS6900: : : : :x:, where:


 * n.n.n.n is the switch management IP address,


 * x is the compute or network node endpoint connection port (in slot/port format),


 * Note the inter-switch (uplink) connection port is left blank.


 * Note. The device specification may include multiple endpoint interface ports separated by spaces (' '). Additionally, the network node is specified separately in the dhcp_server_interface parameter and should not be included in the *_device configuration parameters.

The corresponding OVS agent configuration file is shown in this example. It should be noted that the tenant_network_type and network_vlan_ranges parameters mirror the definitions in the OONP configuration file found on the controller node. Also pay special attention to the absence of the omni_core_devices and core_network_config configuration elements.

omniswitch_network_plugin.ini entries:

 [ml2_ale_omniswitch]
 omni_edge_devices = 192.168.1.10:OS6900: : : : :TELNET:1/15 1/16:
 dhcp_server_interface = 192.168.1.10:OS6900: : : : :TELNET:1/20
 switch_access_method = TELNET
 switch_save_config_interval = 600

ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-eth1
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1005:1015

 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Virtual-Chassis with LACP Topology
This configuration utilizes a combination of Link Aggregation and Virtual Chassis features to provide network redundancy. The example is implemented using a pair of OmniSwitch OS6900s. The topology outlined is limited to a single ‘edge’ switch for simplicity; however, this is the most complex configuration presented.

Prior to use, the switches must be configured as a virtual chassis.



Link Aggregation features must also be enabled and configured on the switch prior to use by the OpenStack instance. LACP is used to manage the network traffic on the aggregated links. The following commands will achieve the Link Aggregation configuration:

 linkagg lacp agg 50 size 2 admin-state enable
 linkagg lacp agg 50 name "OpenStack LACP testing nova02"
 linkagg lacp agg 50 actor admin-key 50
 linkagg lacp agg 51 size 2 admin-state enable
 linkagg lacp agg 51 name "OpenStack LACP testing nova01"
 linkagg lacp agg 51 actor admin-key 51
 linkagg lacp agg 52 size 2 admin-state enable
 linkagg lacp port 1/1/19 actor admin-key 51
 linkagg lacp port 1/1/20 actor admin-key 50
 linkagg lacp port 2/1/16 actor admin-key 51
 linkagg lacp port 2/1/17 actor admin-key 50
 unp linkagg 50
 unp linkagg 50 classification enable
 unp linkagg 51
 unp linkagg 51 classification enable

The compute hosts also require specialized configuration to support LACP. While OVS directly supports link aggregation (and LACP), there have been issues with 802.1q VLAN tagging when combined with LACP. Therefore, the Linux bonding driver is used to provide LACP support, and OVS is configured to use the bond interface. The interface bonding is described by the following /etc/network/interfaces configuration elements:

 auto eth0
 allow-bond0 eth0
 iface eth0 inet manual
     bond-master bond0

 auto eth2
 allow-bond0 eth2
 iface eth2 inet manual
     bond-master bond0

 auto bond0
 iface bond0 inet manual
     bond-mode 802.3ad
     bond-slaves none
     bond-miimon 100
     bond-lacp-rate fast

The OpenVSwitch configuration required to utilize the bond0 interface defined is:

 ovs-vsctl show
 dbb9cf7f-3c38-42b2-831f-4ebf940d7c43
     Bridge "br-bond0"
         Port "bond0"
             Interface "bond0"
         Port "phy-br-bond0"
             Interface "phy-br-bond0"
         Port "br-bond0"
             Interface "br-bond0"
                 type: internal
     Bridge br-int
         Port br-int
             Interface br-int
                 type: internal
         Port "int-br-bond0"
             Interface "int-br-bond0"
     ovs_version: "1.10.2"

The OONP will manage the compute node 802.1q VLAN configuration using the vNP and mac-address classification mechanisms, as in prior configurations.

omniswitch_network_plugin.ini entries:

 [ml2_ale_omniswitch]
 omni_edge_devices = 10.255.205.106:OS6900: : : : :TELNET:50 51:
 dhcp_server_interface = 10.255.205.106:OS6900: : : : :TELNET:2/1/10
 switch_save_config_interval = 600

ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-eth1
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1100:1110

 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

VC with LACP Compute Node OVS Configuration File
ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-bond0
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1100:1110

 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

VC with LACP Network Node OVS Configuration File
ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-eth1
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1100:1110

 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Example ml2_conf.ini File
Refer to this link: ml2_conf.ini File