
OmniSwitch Plug-in for OpenStack Installation

The OmniSwitch OpenStack Networking Plug-in (OONP) offers infrastructure services for OpenStack logical networks by orchestrating Alcatel-Lucent OmniSwitches as the underlying physical network. When used in conjunction with the OpenVSwitch plug-in, end-to-end multi-tenant network provisioning through OpenStack Networking (Quantum/Neutron) is achieved.

The plug-in is intended to be installed in existing OpenStack environments to configure the underlying physical network for its cloud networking operations.

  • Note. This OONP release supports only the base plug-in features such as link aggregation, Virtual Chassis, VPA-based networking, and driver support for Telnet and REST. Advanced features such as UNP, MVRP, SPB, and QoS are not currently supported.


OmniSwitch OpenStack Networking Plug-in Architecture

The OpenStack Networking ML2 plug-in provides an extensible architecture that supports a variety of mechanism drivers for configuring physical networks. The architecture provides an environment where multiple independent drivers can be used to configure different network devices from different vendors. Each driver uses its own internal mechanism to communicate with its respective network elements. From the OpenStack Neutron-server perspective, a single interface is exposed through which the required networking services are provided to the OpenStack cloud applications. This allows OpenStack Networking to configure the physical network as well as the virtual switch instances running on the hypervisors. In addition, the OpenStack Networking L3-agent and DHCP-agent are fully supported.
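As an illustration of how the two drivers coexist under ML2, the [ml2] section of an ml2_conf.ini file might look like the following sketch. This is only a sketch using standard ML2 option names; the physnet1 label and the VLAN range are placeholder values taken from the examples later in this document, and the authoritative settings are given in the installation and topology chapters below.

  [ml2]
  # VLAN is the tenant network type used with the OONP
  type_drivers = vlan
  tenant_network_types = vlan
  # The OVS and OmniSwitch mechanism drivers are loaded side by side
  mechanism_drivers = openvswitch,omniswitch

  [ml2_type_vlan]
  # Reserved VLAN range for tenant networks (placeholder values)
  network_vlan_ranges = physnet1:1005:1015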

The OmniSwitch OpenStack Networking Plug-in works with the OVS ML2 driver

The OmniSwitch ML2 Mechanism Driver supports the following hardware platforms with their respective AOS software releases.

  • OS6900 and OS10K with AOS 732-R01 SW release and above
  • OS685X and OS9000 with AOS 645-R02 SW release and above
  • OS6250 and OS6450 with AOS 664-R01 and above, limited to the features supported on this platform
  • OS6860 and OS6860E with AOS 811-R01.


Basic topology.png

In this deployment scenario, the OmniSwitch ML2 Mechanism Driver uses configuration elements from the OpenVSwitch database to ensure configuration consistency and consistent VLAN ID assignment between it and the OVSNeutronPluginV2 plug-in.


VLAN Based Tenant Networks

The plug-in supports the VLAN network type. This means that logical networks (also referred to as “tenant networks”) are realized using VLANs in the OmniSwitch; thus, any operation related to tenant networks performed in OpenStack Networking is translated and configured in the OmniSwitch using VLANs. The VLAN id is obtained from the reserved VLAN range defined in the plug-in configuration file.
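As an example of how a tenant network maps to a VLAN, an administrator can pin a logical network to a specific VLAN from the reserved range using the Neutron provider extension. The network name below is a placeholder, and physnet1 and VLAN 1005 are the example values used later in this document; in the normal tenant workflow the VLAN ID is simply allocated automatically from the reserved range.

  > neutron net-create demo-net --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 1005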

VLAN ID assignment is efficiently and intelligently used on switch ports by provisioning and de-provisioning VLANs across switches as virtual machines connected to tenant networks are created and destroyed.

Moreover, connectivity from the compute hosts to the physical network is trunked to allow traffic only from the VLANs configured on the host by the virtual switch.

The compute host VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent VNP mechanism. VNP classification is performed on the basis of either the MAC address of the virtual machine or the VLAN tag. While both methods are supported, only one can be used within an OpenStack installation instance. Network node connectivity (DHCP and L3 services) is automatically managed using VLAN-Port-Assignment.


Additional Supported Features

The following additional features are also supported in this release.

Device specific switch_access_method

The feature allows a different access method (TELNET or REST) to be used for each individual device in the topology. If an AOS 6.X device is mistakenly configured to use the REST interface, an ALERT message will be logged and TELNET will be used automatically on those devices.

  • Note. This parameter is added as an additional entry in the existing device configuration. Please refer to the .ini file for the proper format.
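For reference, the Multi-Switch All Edge topology example later in this document contains such an entry; in that excerpt (abridged here), the switch at 192.168.2.51 overrides the global TELNET setting and is managed over REST:

  switch_access_method = TELNET
  omni_edge_devices = 192.168.2.51:OS6900:openstack:secret: : :REST:1/16:1/19 1/20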

VPA based host classification method

In addition to supporting UNP profiles on the edge switches for the interfaces connected to the Compute nodes, the static 802.1q tagged VLAN-Port-Assignment (VPA) method is also supported. This is useful mainly for supporting an OS6450 as an edge switch.

Switch save config interval

The interval at which the switch configuration is saved periodically must be in the range of 600 - 1800 seconds. If a value is configured outside this range, an ALERT message is logged and the minimum value (600 seconds) is used automatically.
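For example, to save the switch configurations every 15 minutes, the plug-in configuration file would contain the following entry (any value within the 600 - 1800 second range is accepted):

  # Save the switch configuration every 900 seconds (15 minutes)
  switch_save_config_interval = 900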


OmniSwitch OpenStack Networking Plug-in (OONP) Installation Overview

OmniSwitch OpenStack Networking Plug-in (OONP) provides OpenStack Networking (Neutron) with the ability to manage the Layer-2 configuration of OmniSwitch devices. The OONP plug-in supports the following features:

  • 802.1q VLAN based tenant networks.
  • Multiple physical topologies - ranging from a single switch to multi-switch based core-edge and spine-leaf topologies.
  • Automatic edge port (host VM connection) configuration based on VLAN Port Assignment (VPA).

The following product matrix shows which features are supported/used for physical OmniSwitch configuration:


  Feature / Product           OS10K/OS6900   OS6860/6860E   OS6850E/OS6850   OS6450
  Switch Access Method        ReST/Telnet    ReST/Telnet    Telnet           Telnet
  Edge Port Configuration     VPA            VPA            VPA              VPA
  Uplink Port Configuration   VPA            VPA            VPA              VPA


While a mix of switch features is supported by the plug-in, only one port configuration method can be used in the OpenStack configuration. For example, if you choose to use MVRP to configure uplink ports, all switches MUST support the MVRP feature. This applies to BOTH the Edge Port Configuration and Uplink Port Configuration parameters.


OmniSwitch OpenStack Networking Plug-in Installation

The plug-in is delivered as a tar.gz file containing the Python modules and supporting applications, and can be found on the PyPI site:

https://pypi.python.org/pypi/networking-ale-omniswitch

The plug-in is installed on the OpenStack controller node using the pip command:

  pip install networking-ale-omniswitch
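The installation can then be verified with pip; the version reported will depend on the release that was downloaded:

  pip show networking-ale-omniswitch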

Neutron must be configured to use the OONP at this point. The typical installation will use a combination of the OONP along with the OVS plug-in. The plug-in configuration is defined in two (2) files in the Ubuntu environment on the controller node:

1. /etc/default/neutron-server - the configuration files are specified using the following entries:

  # defaults for neutron-server
  # path to config file corresponding to the core_plugin specified in neutron.conf
  NEUTRON_PLUGIN_CONFIG=/etc/neutron/plugins/ml2/ml2_conf.ini
  # Edits for OONP plugin conf file inclusion
  OONP_CONFIG="--config_file /etc/neutron/plugins/ml2/omniswitch_network_plugin.ini"
  NEUTRON_PLUGIN_CONFIG="${NEUTRON_PLUGIN_CONFIG} ${OONP_CONFIG}"

2. /etc/neutron/neutron.conf - the plug-in configuration is specified using the following entries:

  core_plugin = ml2

Manual Changes Required for Neutron Configuration

After the successful installation of the OONP plug-in, manual changes are required to Neutron's configuration. The required manual changes are described below:

1. In ‘/etc/neutron/plugins/ml2/ml2_conf.ini’, update the following line to include omniswitch as shown below.

  mechanism_drivers = openvswitch,omniswitch

2. Update the OONP’s configuration file with the topology details and configuration options that you want to use.

  /etc/neutron/plugins/ml2/omniswitch_network_plugin.ini

3. Restart the neutron-server.

  > service neutron-server restart

After running the above steps, you are ready to use the OpenStack Neutron APIs, CLI, or Horizon web UI to manage the cloud network.
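As a quick sanity check (a sketch with a placeholder network name), a tenant network can be created from the CLI and inspected. When run with administrative credentials, the provider:segmentation_id field in the net-show output reports the VLAN allocated from the reserved range.

  > neutron net-create test-net
  > neutron net-show test-net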

OONP Installation Notes

1. The omniswitch_topology_utils.py utility provides additional options for managing the OmniSwitch, in addition to the Neutron APIs, CLI, or Horizon UI.

To remove the dynamic tenant configurations created by the OpenStack plug-in on the OmniSwitch, use the following option (useful in a test environment):

  > python omniswitch_topology_utils.py clear_tenant

To save the modifications made on the OmniSwitch, use the following option:

  > python omniswitch_topology_utils.py save

2. The logs of the OONP (plug-in and driver operations) can be viewed at /var/log/neutron/server.log.
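To follow the plug-in and driver activity in real time, the standard log tools can be pointed at that file; the filter string below is only a suggestion:

  > tail -f /var/log/neutron/server.log | grep -i omniswitch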

OmniSwitch OpenStack Networking Plug-in Configuration

The plug-in requires configuration data about the plug-in operational details, details of the devices, the physical network topology, and the mechanisms to be used to provide the end-to-end connectivity across core and edge network switches.

When the plug-in is installed, the configuration file will contain “default” values. These values must be modified by the user before using the plug-in.

Any changes made in the configuration file during run-time require the Neutron-server to be restarted for the changes to take effect.

Below is an example omniswitch_network_plugin.ini file. Additionally refer to the “OmniSwitch OONP Installation and Multi-Switch Edge-Core Topology Example” chapter for example topologies and their associated configuration parameters.

  [DEVICE]
  
  # This is used to define the edge switches and which ports the compute/network
  # nodes are attached to. The entry may contain 1 or more device definitions.
  # The definition is:
  # <switch-ip>:<switch-type>:<user-name>:<password>:<command-prompt>:<access_method>
  # <node-interfaces>:<core-interfaces>, with blank entries for user-name, password,
  # or command-prompt specifying default values. A blank value for access_method will
  # result in using the global switch_access_method
  
  omni_edge_devices = 192.168.222.33:OS6900: : : : :1/19,
  192.168.222.35:OS6900: : : : :1/16:1/20
  
  # Used to define 0, or more, core switch configurations; the entry follows the same
  # format as that of omni_edge_devices.
  
  omni_core_devices = 192.168.222.34:OS6900: : : : :1/19 1/20
  
  # This is used to specify which switch and port the DHCP server (network node) is
  # connected to.
  
  dhcp_server_interface = 192.168.222.33: : : : :1/18
  
  # The default global method to access devices if not overridden by the
  # switch_access_method
  # parameter of the omni_edge_devices, omni_core_devices, and/or
  # dhcp_server_interface.
  # <TELNET|REST>
  
  switch_access_method = TELNET
  
  
  # SWITCH_SAVE_CONFIG_INTERVAL:
  # This is used to specify how often (in seconds) the config changes in the switches
  # are to be saved.
  # The valid range is 600 - 1800 seconds. If the value is out of range, it will
  # default to 600 seconds.
  switch_save_config_interval = 600


Topology Examples

This chapter presents example OpenStack topologies with the associated OmniSwitch OpenStack Networking Plug-in (OONP) configurations. The configurations presented are based on two (2) variants of topologies:

  • A Single OS6900 switch
  • Multiple OS6900 switches.

Network configuration and management is achieved by using a combination of the OONP to configure the physical switches and OpenVSwitch (OVS) to configure the compute and network nodes.

The OONP supports tenant network isolation using VLANs; therefore, the OpenStack instance must generally be configured to use VLANs. These configurations do not use the flat-dhcp or tunnel based tenant topologies (GRE or VxLAN).

The OONP supports multiple OmniSwitch product families as well as a variety of configuration and management methods. Generally, only one classification method and one uplink management method are available for a given OpenStack instance. For example, if VLAN classification is selected as the edge port configuration method, it will be used for all edge switch configurations. Likewise, if MVRP is chosen as the uplink port configuration method, MVRP will be used for all switches. This results in an OONP configuration that utilizes the common configuration options amongst all of the switches within the physical topology.

The configurations presented in this document are based on the OONP_K_R01_3 release for OpenStack Liberty.


OmniSwitch OONP Installation and Multi-Switch Edge-Core Topology Example

This topology utilizes three (3) OmniSwitch 6900s in a core-edge configuration. Due to the homogeneous use of the OS6900 switch in this topology, the advanced configuration features of MVRP and vNP can be utilized.

The physical network is composed of two (2) 'edge switches', which provide end-station connectivity, and one (1) 'core switch'. It should be noted that 'edge switches' have OpenStack nodes (and possibly other computing devices) connected to them, while 'core switches' are connected only to other switches, either cores or edges.

The compute node VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent vNP mechanism. The VM MAC address is used to identify and create the correct 802.1q VLAN tag configuration on the edge port (called MAC address classification).

The Network node VLAN trunk configuration is managed by the OmniSwitch plug-in using VLAN-Port-Assignment (VPA). (This is required because vNP classification occurs on ingress and the DHCP server, located on the network node, must have static VLAN connectivity to receive the tenant VM DHCP DISCOVER broadcast message.)

VLAN uplink connectivity between the edge switches and the core switch is learned and configured automatically using MVRP.

This example provides steps for installing and configuring the OONP. The physical configuration is composed of two (2) compute nodes, one (1) network node, and a separate controller node. There are three (3) physical networks (refer to “Figure: Multi-Switch Edge-Core Topology”):

  • The 'public' network, which has connections to both the controller and network nodes
  • The internal management network, which has connections to all devices and nodes
  • The 'private' tenant network, which has connections to the compute and network nodes

The following switch configuration assumptions are used in the example:

  • The management/admin network is 10.1.2.0/24
  • The EMP is used as the management interface
  • The tenant VLAN range is 1005-1015
  • The OmniSwitch device plug-in will use the factory default credentials to login to the switches (admin:switch)
  • The configuration assumes the factory default prompt on all of the switches (->)
  • The OmniSwitch device plug-in will use the telnet driver to communicate with the switches.
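Before wiring these values into the plug-in configuration, it can be worth confirming that the controller node can reach each switch with the credentials and access method assumed above; a minimal check, using the management addresses from this example, is:

  > telnet 10.1.2.33
  > telnet 10.1.2.34
  > telnet 10.1.2.35

Each switch should present a login prompt and accept the factory default credentials (admin/switch).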

Refer to the Multi-Switch Topology OONP configuration file example for the following parameters:

  • The general OmniSwitch plug-in configuration elements selecting MAC_ADDRESS qualification, MVRP configuration, and TELNET switch communication methods are shown in green.
  • The VLAN range configuration, mapping the usage of VLANs 1005-1015, is shown in red.

The switch connection topology in “Figure: Multi-Switch Edge-Core Topology” is mapped directly into the omni_edge_devices, omni_core_devices, and dhcp_server_interface configuration parameters, shown in blue in the example OONP configuration file. Each parameter may have multiple switch definitions separated by commas ','. The configuration attributes for the switch definition are as follows:

  <Switch-IP>:<Switch-Type>:<User>:<Password>:<Prompt>:
  <Switch-access-method>:<Node-interfaces>:<Core-interfaces>

In this example the factory defaults are used for the authentication credentials and the default CLI prompt. Additionally, the plug-in global switch_access_method will be used, producing an entry of the following format:

  n.n.n.n:OS6900: : : : :X:Y

where n.n.n.n is the switch management IP address, X is the compute or network node endpoint connection port (in slot/port format), and Y is the inter-switch (uplink) connection port.

  • Note. The device specifications may include multiple interface ports for both the endpoint and inter-switch interfaces separated by spaces. Additionally, the network node is specified separately in the dhcp_server_interface parameter and should not be included in the *_device configuration parameters.
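As a worked illustration, the values below are taken from the configuration file that follows: the edge switch at 10.1.2.33 has a compute node on port 1/16 and an uplink to the core on port 1/19, which is expressed as:

  # <switch-ip>:<switch-type>:<user>:<password>:<prompt>:<access_method>:<node-interfaces>:<core-interfaces>
  10.1.2.33:OS6900: : : : :1/16:1/19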

The corresponding OVS agent configuration file for this example is shown below. It should be noted that the tenant_network_type and network_vlan_ranges parameters duplicate the definitions in the OONP configuration file found on the controller node.

Multi-Switch Edge-Core Topo.jpg

omniswitch_network_plugin.ini entries:

  [DEVICE]
  omni_edge_devices = 10.1.2.33:OS6900: : : : :1/16:1/19,
  10.1.2.35:OS6900: : : : :1/16:1/20
  omni_core_devices = 10.1.2.34:OS6900: : : : :1/19 1/20
  dhcp_server_interface = 10.1.2.33:OS6900: : : : :1/15
  switch_access_method = TELNET
  switch_save_config_interval = 600
 

ml2_conf.ini entries:

 [OVS]
 bridge_mappings = physnet1:br-eth1
 integration_bridge = br-int
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1005:1015
 
 [SECURITYGROUP]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
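After updating the OVS agent configuration on the compute and network nodes, the agent must be restarted for the changes to take effect. On Ubuntu the service is typically named as shown below, although the exact service name may vary between releases:

  > service neutron-plugin-openvswitch-agent restart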

Installing and Configuring the OONP

To install and configure the OONP plug-in, do the following.

1. Download the OONP_K_R01_3.tar.gz plug-in file and the omniplugin_install_common shell script from the Alcatel-Lucent Service & Support website at https://service.esd.alcatel-lucent.com/. Ensure that you download the latest version of the plug-in.

2. Copy both the plug-in package and the installation script into the /tmp directory.

3. As root, run the ./omniplugin_install_common script from the /tmp directory. When the script is executed, the output is as follows:

  root@os2-cntl-k:~/Liberty_oonp_pkg# date
  Thu Jul  9 16:57:11 PDT 2015
  root@os2-cntl-k:~/Liberty_oonp_pkg#
  root@os2-cntl-k:~/Liberty_oonp_pkg# ./omniplugin_install_common OONP_K_R01_3.tar.gz
  Installation parameters:
  OONP_PKG:	              OONP_K_R01_3
  OONP_INST_VER:	      K(Liberty)
  OONP_SRC_PATH:	      /root/Liberty_oonp_pkg
  OONP_CFG:	              omniswitch_network_plugin.ini
  OONP_CFG_PATH:	      /etc/neutron/plugins/ml2
  NEUTRON_PATH:	              /usr/lib/python2.7/dist-packages/neutron
  NEUTRON_CFG_PATH:	      /etc/neutron
  NEUTRON_USER:	            neutron
  NEUTRON_GROUP:	         neutron
  Ubuntu mods:	              1
  myname:	                omniplugin_install_common
  mydir:	                       .
  mylog:	                      /tmp/omniplugin_install_common-5860.log
  mydate:                       09/07/2015-16:56
  debug:	                      2
  quiet:	                             0
  donothing:	                     0
  ******** OmniSwitch Network Plug-in Installation Begin... ******** 
  Creating omniswitch mechanism dir
     </usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/omniswitch> ... 
  Installing mechanism & config file(s)...
     Extracting package <OONP_K_R01_3> into
  </usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/omniswitch>...LICENSE
  NOTICE
  __init__.py
  config.py
  consumer.py
  official_build_num
  omniswitch_constants.py
  omniswitch_db_v2.py
  omniswitch_device_plugin.py
  omniswitch_isid_table
  omniswitch_mechanism_driver.py
  omniswitch_ml2_db.py
  omniswitch_network_plugin.ini
  omniswitch_network_plugin.py
  omniswitch_neutron_dbutils.py
  omniswitch_restful_driver.py
  omniswitch_setup
  omniswitch_ssh_driver.py
  omniswitch_telnet_driver.py
  omniswitch_topology_utils.py
  OKAY
     Setting file ownerships and permissions...OKAY
 
  Adding omniplugin class to neutron entry_points.txt file...[neutron.ml2.mechanism_drivers]
  OKAY
  Installing mechanism config file...
     Creating mechanism config dir </etc/neutron/plugins/ml2>...EXISTS
     Saving existing plugin configuration file <omniswitch_network_plugin.ini>...SKIPPED
     Copying omniswitch_network_plugin.ini into </etc/neutron/plugins/ml2>...OKAY
  Performing Ubuntu specific configuration steps...
     Saving existing /etc/default/neutron-server...
     Modifying neutron startup defaults file /etc/default/neutron-server...
     Creating upstart file /etc/init/neutron-server.override...
  OKAY
  *********** OmniSwitch Network Plug-in Installation End. ***************
  root@os2-cntl-k:~/Liberty_oonp_pkg#

4. Edit the Neutron-server's configuration file to use the OmniSwitch Network plug-in. Edit "/etc/neutron/neutron.conf" and update the 'core_plugin' parameter as follows:

  core_plugin = ml2

5. Edit "/etc/default/neutron-server" to indicate the plug-in configuration file to Neutron-server. Update the file with:

  NEUTRON_PLUGIN_CONFIG=/etc/neutron/plugins/ml2/ml2_conf.ini
  # Edits for OONP plugin conf file inclusion
  OONP_CONFIG="--config_file /etc/neutron/plugins/ml2/omniswitch_network_plugin.ini"
  NEUTRON_PLUGIN_CONFIG="${NEUTRON_PLUGIN_CONFIG} ${OONP_CONFIG}"

6. Edit the omniswitch_network_plugin.ini file to match the topology's port configuration.

7. Restart the neutron-server on the controller (as root, run "service neutron-server restart").
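To confirm that the plug-in is driving the switches, a tenant network can be created and the resulting VLAN checked on an edge switch. The commands below are only a sketch: the network name is a placeholder, and show vlan is the standard AOS command for listing the VLANs configured on the switch. The newly allocated VLAN should appear within the reserved 1005-1015 range.

  > neutron net-create verify-net
  > telnet 10.1.2.33
  -> show vlan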

Multi-Switch All Edge Topology Example

This topology consists of three (3) OmniSwitch OS6900s in a mesh interconnect. Each of the switches has at least one (1) OpenStack node connected to it (either compute or network). This configuration does not contain a core switch so the configuration file will not have the omni_core_devices configuration element.

The compute node VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent vNP mechanism. The incoming 802.1q tag is used to identify and create the correct 802.1q VLAN tag configuration on the edge port (called VLAN classification).

As in the previous topology, the Network node VLAN trunk configuration is managed by the OmniSwitch plug-in using VPA.

The inter-switch link 802.1q configuration is also managed using VPA.

Multi-Switch all Edge Topo.jpg

This example starts with a functioning OpenStack Liberty configuration (with a working Neutron OVS networking infrastructure). The instance is composed of three (3) compute nodes, a network node, and the controller, and uses three (3) networks:

  • The ‘public’ network, which has connections to both the controller and network nodes
  • The internal management network, which has connections to all devices and nodes
  • The ‘private’ tenant network, which has connections to the compute and network nodes

The following switch configuration assumptions are used in the example:

  • The management/admin network is 192.168.2.0/24
  • The EMP is used as the management interface
  • The tenant VLAN range is 1005-1015
  • The OmniSwitch device plug-in will use the username openstack and the password secret to login to the switches
  • The configuration assumes the factory default prompt on all of the switches (->)
  • The OmniSwitch device plug-in will use the telnet driver to communicate with all of the switches (as specified by the plug-in global switch_access_method); however, the REST method will be used to control the switch at 192.168.2.51.

Refer to the Multi-Switch Topology OONP configuration file example for the following parameters:

  • The general OmniSwitch plug-in configuration elements selecting VLAN qualification, VPA configuration, and TELNET switch communication methods are shown in green.
  • The VLAN range configuration, mapping the usage of VLANs 1005-1015, is shown in red.


The switch connection topology in “Figure: Multi-Switch All Edge Topology” is mapped directly into the omni_edge_devices and dhcp_server_interface configuration parameters, shown in blue in the example OONP configuration file. Note that the omni_core_devices element is absent from the configuration. The switch definitions follow the same format described in the previous example.

In this example the authentication credentials openstack:secret and the default CLI prompt are used, producing an entry of the following format:

  n.n.n.n:OS6900:openstack:secret: : :X:Y

where n.n.n.n is the switch management IP address, X is the compute or network node endpoint connection port (in slot/port format), and Y is the inter-switch (uplink) connection port.

omniswitch_network_plugin.ini entries:

  [DEVICE]
  omni_edge_devices = 192.168.2.50:OS6900:openstack:secret: : :1/16:1/19 1/20,
                      192.168.2.51:OS6900:openstack:secret: : :REST:1/16:1/19 1/20,
                      192.168.2.52:OS6900:openstack:secret: : :1/16:1/19 1/20
  dhcp_server_interface = 192.168.2.50:OS6900:openstack:secret: : :1/15
  switch_access_method = TELNET
  switch_save_config_interval = 600

ml2_conf.ini entries:

  [OVS]
  bridge_mappings = physnet1:br-eth1
  integration_bridge = br-int
  tenant_network_type = vlan
  network_vlan_ranges = physnet1:1005:1015
 
  [SECURITYGROUP]
  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Single-Switch Topology Example

This configuration utilizes a single OmniSwitch OS6900 to provide the OpenStack tenant network interconnectivity. The configuration is limited to a single ‘edge’ switch.

The compute node VLAN trunk configuration on the OmniSwitch is automated through the use of the Alcatel-Lucent vNP mechanism. The VM mac-address will be used to identify and create the correct 802.1q VLAN tag configuration on the host port.

The Network node VLAN trunk configuration is managed by the OmniSwitch plug-in using VLAN-Port-Assignment. (This is required because vNP classification occurs on ingress and the DHCP server located on the network node must have VLAN connectivity to receive the tenant VM DHCP DISCOVER broadcast message.)

This example starts with a functioning OpenStack Liberty configuration (with a working Neutron OVS networking infrastructure). The instance is composed of two (2) compute nodes and a combined controller/network node, and uses three (3) networks:

  • The ‘public’ network, which has a connection to the controller/network node;
  • The internal management network, which has connections to the OmniSwitch devices and nodes;
  • The ‘private’ data network, with connections to the compute and controller/network nodes.

The following switch configuration assumptions are used in the example:

  • The management/admin network is 192.168.1.0/24
  • The EMP is used as the management interface
  • The tenant VLAN range is 1005-1015
  • The OmniSwitch device plug-in will use the factory default credentials to login to the switch (admin:switch)
  • The configuration assumes the factory default prompt on the switch
  • The OmniSwitch device plug-in will use the telnet driver to communicate with the switch

Refer to the Single-Switch Topology OONP configuration file example for the following parameters:

Single Switch Topo.jpg

  • The general OmniSwitch plug-in configuration elements selecting MAC_ADDRESS qualification and TELNET switch communication methods are shown in green.
  • The VLAN range configuration, mapping the usage of VLANs 1005-1015, is shown in red.

The switch connection topology in “Figure: Single Switch Topology” is mapped directly into the omni_edge_devices and dhcp_server_interface configuration parameters, shown in blue in the example OONP configuration file.

In this example the factory defaults are used for the authentication credentials and the default CLI prompt; additionally, the plug-in global switch_access_method will be used. However, this example does NOT make use of core-switch connections, producing an entry of the following format:

  n.n.n.n:OS6900: : : : :x:

where n.n.n.n is the switch management IP address and x is the compute or network node endpoint connection port (in slot/port format). Note that the inter-switch (uplink) connection port is left blank.
  • Note. The device specification may include multiple endpoint interface ports separated by spaces ' '. Additionally, the network node is specified separately in the dhcp_server_interface parameter and should not be included in the *_device configuration parameters.

The corresponding OVS agent configuration file is shown below. It should be noted that the tenant_network_type and network_vlan_ranges parameters mirror the definitions in the OONP configuration file found on the controller node. Also pay special attention to the lack of the omni_core_devices and core_network_config configuration elements.

omniswitch_network_plugin.ini entries:

  [DEVICE]
  omni_edge_devices = 192.168.1.10:OS6900: : : : :1/15 1/16:
  dhcp_server_interface = 192.168.1.10:OS6900: : : : :1/20
  switch_access_method = TELNET
  switch_save_config_interval = 600

ml2_conf.ini entries:

  [OVS]
  bridge_mappings = physnet1:br-eth1
  integration_bridge = br-int
  tenant_network_type = vlan
  network_vlan_ranges = physnet1:1005:1015
  
  [SECURITYGROUP]
  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf