A Cisco Plugin Framework for Neutron L2 Network Overlays Spanning Multiple Physical Switches (Havana Release)

Introduction

This plugin implementation provides the following capabilities:

  • A reference implementation for a Neutron Plugin Framework (For details see: http://wiki.openstack.org/quantum-multi-switch-plugin)
  • Supports multiple switches in the network
  • Supports multiple models of switches concurrently
  • Supports use of multiple L2 technologies
  • Supports the Cisco Nexus family of switches (Verified with Nexus 3000, 5000, 7000, and 9000 series)

Overlay Architecture

The Cisco plugin overlay architecture uses model layers to overlay the Nexus plugin on top of the Openvswitch plugin. It supports two segmentation methods for the Openvswitch plugin: VLAN and GRE tunnels.
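As a minimal sketch of how this layering shows up in configuration (the option values are taken from the configuration sections below; which Nexus driver or sub-plugin to enable depends on the mode you choose), cisco_plugins.ini points the model class at the vswitch sub-plugin:

[cisco]
model_class=neutron.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2

[cisco_plugins]
vswitch_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2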


Prerequisites

(The following are necessary only when using the Nexus devices in your system. If you plan to just leverage the plugin framework, you do not need these.)

If you are using a Nexus switch in your topology, you'll need the following NX-OS version and packages to enable Nexus support:

  • NX-OS 5.2.1 (Delhi) Build 69 or above.
  • paramiko library - SSHv2 protocol library for Python
  • ncclient v0.3.1 - Python library for NETCONF clients
    • You need a version of ncclient modified by Cisco Systems. To get it, run the following from your shell prompt (a quick import check follows after this list):
git clone git@github.com:CiscoSystems/ncclient.git
cd ncclient
sudo python ./setup.py install
  • For more information on ncclient, see: http://schmizz.net/ncclient/
  • OS supported:
    • RHEL 6.1 or above
    • Ubuntu 11.10 or above
  • Package: python-configobj-4.6.0-3.el6.noarch (or newer)
  • Package: python-routes-1.12.3-2.el6.noarch (or newer)
  • Package: MySQL-python (install with pip install mysql-python)
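A quick sanity check (a sketch, assuming the packages above were installed into the system Python that Neutron uses) is to confirm the libraries import on the host that will run neutron-server:

python -c "import paramiko, ncclient"

If this prints nothing and exits cleanly, the SSH and NETCONF libraries are available.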


Module Structure

  • neutron/plugins/cisco/ - Contains the Network Plugin Framework
    • /client - CLI module for core and extensions API
    • /common - Modules common to the entire plugin
    • /conf - All configuration files
    • /db - Persistence framework
    • /models - Class(es) which tie the logical abstractions to the physical topology
    • /nexus - Nexus-specific modules
    • /test/nexus - A fake Nexus driver for testing the plugin


Basic Plugin Configuration

1. Make a backup copy of /etc/neutron/neutron.conf

2. Edit /etc/neutron/neutron.conf and set the "core_plugin" for the v2 API. Also verify/add the keystone information.

core_plugin = neutron.plugins.cisco.network_plugin.PluginV2

[keystone_authtoken]
auth_host = <authorization host's IP address>
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = <keystone admin name>
admin_password = <keystone admin password>

3. MySQL database setup:

  • 3a. Create the neutron_l2network database in MySQL with the following command:
mysql -u<mysqlusername> -p<mysqlpassword> -e "create database neutron_l2network"
  • 3b. Enter the neutron_l2network database configuration info in the [DATABASE] section of the /etc/neutron/plugins/cisco/cisco_plugins.ini file (a minimal sketch follows this list).
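A minimal sketch of that [DATABASE] section, assuming a local MySQL server; the option name sql_connection is an assumption here, so check the sample cisco_plugins.ini shipped with your release and adjust the host and credentials to your setup:

[DATABASE]
sql_connection = mysql://<mysqlusername>:<mysqlpassword>@localhost/neutron_l2network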


4. Configure the model layer to use Openvswitch as the vswitch plugin:

  • Update the "vswitch_plugin" value of the [cisco_plugins] section of /etc/neutron/plugins/cisco/cisco_plugins.ini:
vswitch_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Cisco Plugin Overlay in Openvswitch GRE Tunnel Mode

In this mode the plugin does not configure anything on the Nexus switch; the switch acts as a simple passthrough.

  • Configure the OVS plugin with the following settings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[ovs]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
local_ip = 172.29.74.73
  • Modify the [cisco] section of /etc/neutron/plugins/cisco/cisco_plugins.ini to add the model class and the fake Nexus driver:
[cisco]
model_class=neutron.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2
nexus_driver=neutron.plugins.cisco.test.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver

When you start neutron-server, make sure to include --config-file arguments for both plugin configuration files (in addition to neutron.conf):

/usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/cisco/cisco_plugins.ini --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

Alternately, you can include the [ovs] section directly in /etc/neutron/plugins/cisco/cisco_plugins.ini:

[cisco]
model_class=neutron.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2
nexus_driver=neutron.plugins.cisco.test.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver

[ovs]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
local_ip = 172.29.74.73

Then start neutron-server with one less --config-file argument:

/usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/cisco/cisco_plugins.ini
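Once neutron-server is running, a quick smoke test (assuming the python-neutronclient CLI is installed and your OpenStack credentials are sourced) is to create and inspect a tenant network through the plugin:

neutron net-create test-net
neutron net-show test-net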

Cisco Plugin Overlay in Openvswitch VLAN Mode

  • Configure the OVS plugin with the following settings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[ovs]
bridge_mappings = physnet1:br-eth1
network_vlan_ranges = physnet1:1000:1100
tenant_network_type = vlan


  • Configure the [cisco_plugins] section of /etc/neutron/plugins/cisco/cisco_plugins.ini:
[cisco_plugins]
#nexus_plugin=neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
vswitch_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2


  • Configure the Nexus switch information in /etc/neutron/plugins/cisco/cisco_plugins.ini. The format should include the IP address of the switch, a host that's connected to the switch, and the switch port that host is connected to. Also, add the Nexus switch credential username and password. You can configure multiple switches as well as multiple hosts per switch, as shown in the example below:
[NEXUS_SWITCH:1.1.1.1]
# Hostname of each node and the switch port it is connected to
compute-1=1/1
compute-2=1/2
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22
# Nexus credentials; ignored if you are not using Nexus switches
username=admin
password=mySecretPasswordForNexus

[NEXUS_SWITCH:2.2.2.2]
# Hostname of each node and the switch port it is connected to
compute-3=1/15
compute-4=1/16
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22
# Nexus credentials; ignored if you are not using Nexus switches
username=admin
password=mySecretPasswordForNexus


  • Make sure that the SSH host key of every Nexus switch is known to the host on which you are running the Neutron service. You can do this simply by logging in to your Neutron host as the user that Neutron runs as and SSHing to each switch at least once. If a host key changes (e.g. due to replacement of the supervisor or clearing of the SSH configuration on the switch), you may need to repeat this step and remove the old host keys from ~/.ssh/known_hosts.
  • In general, make sure that every Nexus switch used in your system has a credential entry in the above file. This is required for the system to be able to communicate with those switches.
  • Start the Neutron service. If something doesn't work, verify the configuration of each of the above files. When you start neutron-server, make sure to include --config-file arguments for both plugin configuration files (in addition to neutron.conf):
/usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/cisco/cisco_plugins.ini --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

Alternately, you can include the [ovs] section directly in /etc/neutron/plugins/cisco/cisco_plugins.ini:

[ovs]
bridge_mappings = physnet1:br-eth1
network_vlan_ranges = physnet1:1000:1100
tenant_network_type = vlan

[cisco_plugins]
#nexus_plugin=neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
vswitch_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

[NEXUS_SWITCH:1.1.1.1]
# Hostname of each node and the switch port it is connected to
compute-1=1/1
compute-2=1/2
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22
# Nexus credentials; ignored if you are not using Nexus switches
username=admin
password=mySecretPasswordForNexus

[NEXUS_SWITCH:2.2.2.2]
# Hostname of each node and the switch port it is connected to
compute-3=1/15
compute-4=1/16
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22
# Nexus credentials; ignored if you are not using Nexus switches
username=admin
password=mySecretPasswordForNexus

Then start neutron-server with one less --config-file argument:

/usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/cisco/cisco_plugins.ini
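After a network has been created and an instance booted on one of the configured hosts, you can check the switch side as well. An illustrative check (assuming compute-1 is attached to port 1/1 as in the example above) is to log in to the Nexus switch and confirm that the VLAN was created and trunked on that interface:

show vlan brief
show running-config interface ethernet 1/1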

Cisco Plugin vPC (Virtual Port Channel) mode

The Cisco plugin supports multi-homed hosts in a vPC setup. A typical vPC setup is illustrated in the following diagram:

Multi Homed vPC hardware configuration

Prerequisites

  • The Cisco plugin will not set up vPC interconnect channels between switches. This needs to be done manually according to this document: NXOS vPC configuration (http://www.cisco.com/en/US/docs/switches/datacenter/nexus3000/sw/layer2/503_U2_1/b_Cisco_n3k_layer2_config_gd_503_U2_1_chapter_01000.html)
  • The data interfaces on the host must be bonded, and this bonded interface should be attached to the external bridge (see the sketch after this list).
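A minimal sketch of that second prerequisite, using Open vSwitch; the names are illustrative: bond0 stands for the bonded data interface, and br-eth1 is the external bridge used in the VLAN-mode examples above:

ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 bond0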

Plugin Configuration

  • To configure vPC in the plugin, you need to inform the plugin of multiple connections per host. For example, if host1 is connected to two Nexus switches, 1.1.1.1 and 2.2.2.2, over port channel 2, your configuration will be:
[NEXUS_SWITCH:1.1.1.1]
# Hostname of the node and the port channel it is connected to
host1=portchannel:2
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22
# Nexus credentials; ignored if you are not using Nexus switches
username=admin
password=mySecretPasswordForNexus

[NEXUS_SWITCH:2.2.2.2]
# Hostname of the node and the port channel it is connected to
host1=portchannel:2
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22
# Nexus credentials; ignored if you are not using Nexus switches
username=admin
password=mySecretPasswordForNexus
  • The ethertype (portchannel, etherchannel, etc.) needs to be specified for a vPC setup; otherwise the plugin will assume an ethertype of Ethernet.
  • Non-vPC setups are not affected by this feature; there is no configuration change.

How to Test the Installation

The unit tests are located at neutron/tests/unit/cisco/. They can be executed from the top-level Neutron directory using tox (install the prerequisites with [sudo] pip install tox testrepository).

1. Testing the core API (without UCS/Nexus/RHEL device sub-plugins configured):

  • By default all the device sub-plugins are disabled (commented out) in etc/neutron/plugins/cisco/cisco_plugins.ini
   tox -e py27 -- neutron.tests.unit.cisco.test_network_plugin
   tox -e py27 -- neutron.tests.unit.cisco.test_nexus_plugin


2. For testing the Nexus device sub-plugin perform the following configuration:

  • Edit etc/neutron/plugins/cisco/cisco_plugins.ini and add the following in the [cisco_plugins] section:
nexus_plugin=neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
  • Edit the etc/neutron/plugins/cisco/cisco_plugins.ini file. When not using Nexus hardware, use the following dummy configuration verbatim:
[NEXUS_SWITCH:1.1.1.1]
# Hostname of the node and the switch port it is connected to
compute-1=1/1
# Port on which SSH is listening on the Nexus switch (default: 22)
ssh_port=22

[cisco]
nexus_driver=neutron.plugins.cisco.test.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver
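With this in place, run the Nexus sub-plugin tests the same way as above:

   tox -e py27 -- neutron.tests.unit.cisco.test_nexus_plugin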


Older Releases

Information for Grizzly and older releases can be found here:

https://wiki.openstack.org/wiki/Cisco-quantum