
Lenovo Networking Plug-in for Openstack Neutron

The Lenovo Networking plugin contains the Lenovo vendor code for Openstack Neutron, supporting the Kilo release or later.


Overview


Openstack is an open source infrastructure initiative for creating and managing large groups of virtual private servers in a cloud computing environment. Lenovo’s Networking Neutron Plugin provides a means to orchestrate VLANs on Lenovo’s physical switches. In cloud environments where VMs are hosted by physical servers, the VMs see a new virtual access layer provided by the host machine.

This access layer can typically be created via several mechanisms, e.g. Linux bridges or virtual switches. The policies of the virtual access layer (the virtual network), once set, must be coordinated with the policies set in the hardware switches. Lenovo’s Neutron Plugin coordinates this behavior automatically, without intervention from the administrator. The illustration below provides an architectural overview of how Lenovo’s ML2 Plugin and switches fit into an Openstack deployment.

[Illustration: Lenovo ML2 Plugin Architecture]

User Guide


The Lenovo Networking Openstack User Guide is provided to assist with the installation and setup of this plugin: Download User Guide


Download Lenovo Plugin Code


The Lenovo Networking ML2 Plugin code is located on Github.
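
As a rough sketch, assuming the repository is published as networking-lenovo under the lenovo Github organization (verify the exact URL on Github before use), the plugin can be fetched and installed with pip:

# Hypothetical repository URL; confirm the actual location on Github.
git clone https://github.com/lenovo/networking-lenovo.git
cd networking-lenovo
sudo pip install .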


Lenovo Networking Products


Learn more about Lenovo datacenter switches on the Lenovo Networking website.


Plugin Configuration

Two sections of /etc/neutron/plugins/ml2/ml2_conf.ini must be modified manually: in the [ml2] section, lenovo must be included in mechanism_drivers, and in the [ml2_type_vlan] section, network_vlan_ranges must be defined.

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan
type_drivers = local,flat,vlan,gre,vxlan

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
tenant_network_types = vlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade
mechanism_drivers = openvswitch,lenovo

# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
network_vlan_ranges = default:1000:1999
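
Note that the physical network name used above (default) must also be mapped to a bridge in the L2 agent configuration on each host. A minimal sketch for the Open vSwitch agent, assuming a hypothetical bridge named br-eth1 that connects to the Lenovo switch:

[ovs]
# br-eth1 is an assumed bridge name; use the bridge attached to the Lenovo switch.
bridge_mappings = default:br-eth1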

Add the Lenovo switch information to an ml2_mech_lenovo:&lt;switch IP&gt; section of this configuration file for each switch. Include the following information (see the examples below):

  • The hostname/IP address of the switch
  • The hostname and switch port of each server connected to the switch
  • The Lenovo switch credentials (username and password)
  • The portchannel or LACP number for hosts connected with VLAG
  • The SSH port number for NETCONF (typically 830)

Several server-to-switch-port mappings may be defined per switch, limited only by the number of available ports.

[ml2_mech_lenovo:10.240.179.65]
# Hostname and port (or portchannel) used on the switch for this compute host.
nova-node-1 = portchannel:64
# Port number where SSH is running on the Lenovo switch. The default is 22, so this
# variable only needs to be configured if different (NETCONF typically uses 830).
ssh_port = 830
# Provide the Lenovo switch login information.
username = admin
password = admin

[ml2_mech_lenovo:10.240.179.54]
# Hostname and port (or portchannel) used on the switch for each compute host.
nova-node-1 = portchannel:64
nova-node-2 = 17
# Port number where SSH is running on the Lenovo switch. The default is 22, so this
# variable only needs to be configured if different (NETCONF typically uses 830).
ssh_port = 830
# Provide the Lenovo switch login information.
username = admin
password = admin

As more switches and servers are added to the network, this file must be updated with their details. Once this configuration is done, networks can be created from the Horizon dashboard or the Openstack command line.
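
For example, after restarting neutron-server so the new settings are loaded, a VLAN network can be created with the Neutron CLI. The network name and VLAN ID below are illustrative; the segmentation ID must fall within the configured network_vlan_ranges:

# Restart neutron-server so the updated ml2_conf.ini is read
# (the service name and init system may vary by distribution).
sudo service neutron-server restart

# Create a VLAN network on physical network 'default' (name and VLAN ID are examples).
neutron net-create demo-net --provider:network_type vlan \
    --provider:physical_network default --provider:segmentation_id 1000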