A Cisco Plugin Framework for Quantum L2 Network Overlays Spanning Multiple Physical Switches (Grizzly Release)

README for Quantum v2.0 Cisco plugin

Introduction

This plugin implementation provides the following capabilities:

  • A reference implementation for a Quantum Plugin Framework (For details see: http://wiki.openstack.org/quantum-multi-switch-plugin)
  • Supports multiple switches in the network
  • Supports multiple models of switches concurrently
  • Supports use of multiple L2 technologies
  • Supports the Cisco Nexus family of switches (Verified with Nexus 3000, 5000 and 7000 series)

Overlay Architecture

The Cisco plugin overlay architecture uses model layers to overlay the Nexus plugin on top of the Openvswitch plugin. It supports two segmentation methods for the Openvswitch plugin: VLAN and GRE tunnels.

Pre-requisites

(The following are necessary only when using Nexus devices in your system. If you plan to use just the plugin framework, you do not need these.)

If you are using a Nexus switch in your topology, you'll need the following NX-OS version and packages to enable Nexus support (a quick sanity check follows the list):

  • NX-OS 5.2.1 (Delhi) Build 69 or above.
  • paramiko library - SSHv2 protocol library for python
  • ncclient v0.3.1 - Python library for NETCONF clients
    • You need a version of ncclient modified by Cisco Systems. To get it, from your shell prompt do:
git clone git@github.com:CiscoSystems/ncclient.git
sudo python ./setup.py install
  • For more information on ncclient, see: http://schmizz.net/ncclient/
  • OS supported:
    • RHEL 6.1 or above
    • Ubuntu 11.10 or above
  • Package: python-configobj-4.6.0-3.el6.noarch (or newer)
  • Package: python-routes-1.12.3-2.el6.noarch (or newer)
  • Package: pip install mysql-python
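
As a quick sanity check (a suggestion, not part of the original instructions), verify that the Python dependencies import cleanly:

python -c "import paramiko, ncclient, configobj, routes, MySQLdb"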

Module Structure

  • quantum/plugins/cisco/ - Contains the Network Plugin Framework
    • /client - CLI module for core and extensions API
    • /common - Modules common to the entire plugin
    • /conf - All configuration files
    • /db - Persistence framework
    • /models - Class(es) which tie the logical abstractions to the physical topology
    • /nexus - Nexus-specific modules
    • /tests - Tests specific to this plugin

Basic Plugin configuration

1. Make a backup copy of /etc/quantum/quantum.conf
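
For example (the backup filename is arbitrary):

cp /etc/quantum/quantum.conf /etc/quantum/quantum.conf.orig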

2. Edit /etc/quantum/quantum.conf and set the "core_plugin" for the v2 API:


core_plugin = quantum.plugins.cisco.network_plugin.PluginV2

3. MySQL database setup:

  • 3a. Create the quantum_l2network database in MySQL with the following command:


mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
  • 3b. Enter the quantum_l2network database configuration info in the /etc/quantum/plugins/cisco/db_conn.ini file, for example as sketched below.
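
A minimal sketch of the db_conn.ini entries, assuming the quantum_l2network database from step 3a; treat the exact key names as assumptions and check the comments shipped in the file:

[DATABASE]
name = quantum_l2network
user = <mysqlusername>
pass = <mysqlpassword>
host = localhost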

4. Configure the model layer to use Openvswitch as the vswitch plugin:

  • Edit the [PLUGINS] section of /etc/quantum/plugins/cisco/cisco_plugins.ini to say:
vswitch_plugin=quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

Cisco plugin overlay in Openvswitch GRE tunnel mode

In this mode the plugin does not configure the Nexus switch; the switch acts as a simple passthrough.

  • Configure the OVS plugin with the following settings in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:


[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
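# local_ip is the tunnel endpoint IP address of this node (the value below is an example)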
local_ip = 172.29.74.73
  • Configure the Cisco plugin with the following settings (/etc/quantum/plugins/cisco/l2network_plugin.ini):


[MODEL]
model_class=quantum.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2
  • Modify the [DRIVER] section of /etc/quantum/plugins/cisco/nexus.ini to use the fake Nexus driver (since the switch is not configured in this mode):


[DRIVER]
name=quantum.plugins.cisco.tests.unit.v2.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver

Cisco plugin overlay in Openvswitch VLAN mode

  • Configure the OVS plugin with the following settings in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:


[OVS]
bridge_mappings = physnet1:br-eth1
network_vlan_ranges = physnet1:1000:1100
tenant_network_type = vlan
  • Configure the [PLUGINS] section of /etc/quantum/plugins/cisco/cisco_plugins.ini:


[PLUGINS]
nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
vswitch_plugin=quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
  • Configure the Nexus switch information in /etc/quantum/plugins/cisco/nexus.ini. The format should include the IP address of the switch, a host that's connected to the switch and the port on the switch that host is connected to. You can configure multiple switches as well as multiple hosts per switch as shown in the example below:


[SWITCH]
# IP address of the switch
[[1.1.1.1]]
# Hostname of the node
[[[compute-1]]]
# Port this node is connected to on the nexus switch
ports=1/1
# Hostname of the node
[[[compute-2]]]
# Port this node is connected to on the nexus switch
ports=1/2
# Port number where SSH will be running on the Nexus switch, e.g. 22 (default)
[[[ssh_port]]]
ssh_port=22

[[2.2.2.2]]
# Hostname of the node
[[[compute-3]]]
# Port this node is connected to on the nexus switch
ports=1/15
# Hostname of the node
[[[compute-4]]]
# Port this node is connected to on the nexus switch
ports=1/16
# Port number where SSH will be running on the Nexus switch, e.g. 22 (default)
[[[ssh_port]]]
ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver_v2.CiscoNEXUSDriver
  • 4c. Make sure that the SSH host key of all Nexus switches is known to the host on which you are running the Quantum service. You can do this simply by logging in to your Quantum host as the user that Quantum runs as and SSHing to the switches at least once, as shown below. If the host key changes (e.g. due to replacement of the supervisor or clearing of the SSH config on the switch), you may need to repeat this step and remove the old host keys from ~/.ssh/known_hosts.
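
For example, from the Quantum host, as the user the Quantum service runs as (usernames and addresses are those from the examples above):

ssh admin@1.1.1.1    # accept the host key when prompted, then log out
ssh admin@2.2.2.2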

7. Verify that you have the correct credentials for each IP address listed in quantum/plugins/cisco/credentials.ini. Example:


# Provide the Nexus credentials, if you are using Nexus switches.
# If not this will be ignored.
[1.1.1.1]
username=admin
password=mySecretPasswordForNexus

[2.2.2.2]
username=admin
password=mySecretPasswordForNexus
  • In general, make sure that every Nexus switch used in your system has a credential entry in the above file. This is required for the system to be able to communicate with those switches.

9. Start the Quantum service. If something doesn't work, verify your configuration of each of the above files.
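
One way to start the service in the foreground for debugging (a sketch; packaged installs usually manage this through the init system, and the exact --config-file arguments depend on your distribution):

quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/cisco/cisco_plugins.ini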

How to test the installation

The unit tests are located at quantum/plugins/cisco/tests/unit/v2. They can be executed from the top level Quantum directory using the run_tests.sh script.

1. Testing the core API (without UCS/Nexus/RHEL device sub-plugins configured):

  • By default all the device sub-plugins are disabled (commented out) in etc/quantum/plugins/cisco/cisco_plugins.ini


   ./run_tests.sh quantum.plugins.cisco.tests.unit.v2.test_api_v2
   ./run_tests.sh quantum.plugins.cisco.tests.unit.v2.test_network_plugin

2. For testing the Nexus device sub-plugin perform the following configuration:

  • Edit etc/quantum/plugins/cisco/cisco_plugins.ini; in the [PLUGINS] section add:


nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
  • Edit the etc/quantum/plugins/cisco/nexus.ini file. When not using Nexus hardware use the following dummy configuration verbatim:


[SWITCH]
[[1.1.1.1]]
# Hostname of the node
[[[compute-1]]]
# Port this node is connected to on the nexus switch
ports=1/1
# Port number where SSH will be running on the Nexus switch, e.g. 22 (default)
[[[ssh_port]]]
ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.tests.unit.v2.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver


A Cisco Plugin Framework for Quantum L2 Network Overlays Spanning Multiple Physical Switches (Havana Release and newer)

In the Havana release, Quantum has been renamed to "Neutron". Information on Havana and newer releases may be found here:

https://wiki.openstack.org/wiki/Cisco-neutron

Pre-Grizzly support information

The Cisco UCS plugin has been deprecated in the Grizzly release (to be brought back in a later release), and support for intelligent multiple-switch configuration has been added. If you are using any release before Grizzly, the following information is relevant.

Module Structure

  • quantum/plugins/cisco/ - Contains the Network Plugin Framework
    • /client - CLI module for core and extensions API
    • /common - Modules common to the entire plugin
    • /conf - All configuration files
    • /db - Persistence framework
    • /models - Class(es) which tie the logical abstractions to the physical topology
    • /nova - Scheduler and VIF-driver to be used by Nova
    • /nexus - Nexus-specific modules
    • /segmentation - Implementation of segmentation manager, e.g. VLAN Manager
    • /services - Set of orchestration libraries to insert In-path Networking Services
    • /ucs - UCS-specific modules

Plugin Installation Instructions

1. Make a backup copy of quantum/etc/quantum.conf

2. Edit quantum/etc/quantum.conf and set the "core_plugin" for the v2 API:

`core_plugin = quantum.plugins.cisco.network_plugin.PluginV2`

3. MySQL database setup:

  • 3a. Create the quantum_l2network database in MySQL with the following command:

`mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"`

  • 3b. Enter the quantum_l2network database configuration info in the quantum/plugins/cisco/conf/db_conn.ini file.

4. If you want to turn on support for Cisco Nexus switches:

  • 4a. Uncomment the nexus_plugin property in etc/quantum/plugins/cisco/cisco_plugins.ini to read:


[PLUGINS]
nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
  • 4b. Enter the relevant configuration in the etc/quantum/plugins/cisco/nexus.ini file. Example:


[SWITCH]
# Change the following to reflect the IP address of the Nexus switch.
# This will be the address at which Quantum sends and receives configuration
# information via SSHv2.
nexus_ip_address=10.0.0.1
# Port numbers on the Nexus switch to which each of the compute nodes is connected.
# Use shortened interface syntax, e.g. "1/10" not "Ethernet1/10" and "," between ports.
ports=1/10,1/11,1/12
# Port number where SSH will be running on the Nexus switch.  Typically this is 22
# unless you've configured your switch otherwise.
nexus_ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver
  • 4c. Make sure that the SSH host key of the Nexus switch is known to the host on which you are running the Quantum service. You can do this simply by logging in to your Quantum host as the user that Quantum runs as and SSHing to the switch at least once. If the host key changes (e.g. due to replacement of the supervisor or clearing of the SSH config on the switch), you may need to repeat this step and remove the old host key from ~/.ssh/known_hosts.

5. If you are using UCS blade servers with M81KR Virtual Interface Cards and want to leverage the VM-FEX features:

  • 5a. Uncomment the ucs_plugin properties in etc/quantum/plugins/cisco/cisco_plugins.ini to read:


[PLUGINS]
ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_plugin_v2.UCSVICPlugin
[INVENTORY]
ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_inventory_v2.UCSInventory

  • 5b. Enter the relevant configuration in the etc/quantum/plugins/cisco/ucs.ini file. Example:

[UCSM]
# change the following to the appropriate UCSM IP address
# if you have more than one UCSM, enter info from any one
ip_address=<put_ucsm_ip_address_here>
default_vlan_name=default
default_vlan_id=1
max_ucsm_port_profiles=1024
profile_name_prefix=q-

[DRIVER]
name=quantum.plugins.cisco.ucs.cisco_ucs_network_driver.CiscoUCSMDriver
  • 5c. Configure the UCS systems' information in your deployment by editing the quantum/plugins/cisco/conf/ucs_inventory.ini file. You can configure multiple UCSMs per deployment, multiple chassis per UCSM, and multiple blades per chassis. Chassis ID and blade ID can be obtained from the UCSM (they will typically be numbers like 1, 2, 3, etc.). Also make sure that you put the exact hostname as nova sees it (the host column in the services table of the nova DB will give you that information).


[ucsm-1]
ip_address = <put_ucsm_ip_address_here>
[[chassis-1]]
chassis_id = <put_the_chassis_id_here>
[[[blade-1]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
[[[blade-2]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
[[[blade-3]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>

[ucsm-2]
ip_address = <put_ucsm_ip_address_here>
[[chassis-1]]
chassis_id = <put_the_chassis_id_here>
[[[blade-1]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
[[[blade-2]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
  • 5d. Configure your OpenStack installation to use the 802.1qbh VIF driver and Quantum-aware scheduler by editing the /etc/nova/nova.conf file with the following entries:


scheduler_driver=quantum.plugins.cisco.nova.quantum_port_aware_scheduler.QuantumPortAwareScheduler
quantum_host=127.0.0.1
quantum_port=9696
libvirt_vif_driver=quantum.plugins.cisco.nova.vifdirect.Libvirt802dot1QbhDriver
libvirt_vif_type=802.1Qbh
  • Note: To be able to bring up a VM on a UCS blade, you should first create a port for that VM using the Quantum create port API. VM creation will fail if an unused port is not available. If you have configured your Nova project with more than one network, Nova will attempt to instantiate the VM with one network interface (VIF) per configured network. To provide plugin points for each of these VIFs, you will need to create multiple Quantum ports, one for each of the networks, prior to starting the VM. However, in this case you will need to use the Cisco multiport extension API instead of the Quantum create port API. More details on using the multiport extension follow in the section on multi NIC support.
To support the above configuration, you will need some Quantum modules on the compute nodes. It's easiest to copy the entire quantum directory from your Quantum installation into /usr/lib/python2.7/site-packages/. This needs to be done on each Nova compute node, for example as sketched below.
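
A minimal sketch (assuming /opt/stack/quantum is where your Quantum checkout lives; substitute your own path):

rsync -a /opt/stack/quantum/quantum/ /usr/lib/python2.7/site-packages/quantum/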

7. Verify that you have the correct credentials for each IP address listed in quantum/plugins/cisco/conf/credentials.ini. Example:


# Provide the UCSM credentials; create a separate entry for each UCSM used in your system
# UCSM IP address, username and password.
[10.0.0.2]
username=admin
password=mySecretPasswordForUCSM

# Provide the Nexus credentials, if you are using Nexus switches.
# If not this will be ignored.
[10.0.0.1]
username=admin
password=mySecretPasswordForNexus
  • In general, make sure that every UCSM and Nexus switch used in your system has a credential entry in the above file. This is required for the system to be able to communicate with those switches.

9. Start the Quantum service. If something doesn't work, verify your configuration of each of the above files.

Multi NIC support for VMs


As indicated earlier, if your Nova setup has a project with more than one network, Nova will try to create a virtual network interface (VIF) on the VM for each of those networks. Before each VM is instantiated, you should create Quantum ports on each of those networks. These ports need to be created using the following REST call:

POST /1.0/extensions/csco/tenants/{tenant_id}/multiport/

with request body:


{'multiport':
 {'status': 'ACTIVE',
  'net_id_list': net_id_list,
  'ports_desc': {'key': 'value'}}}


where net_id_list is a list of network IDs: [netid1, netid2, ...]. The "ports_desc" dictionary is reserved for later use. For now, the same structure in terms of the dictionary name, key and value should be used.
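
For illustration, a hedged curl sketch of this call; the host and port (127.0.0.1:9696) are taken from the nova.conf example above, while the Content-Type header and JSON quoting are assumptions rather than documented behavior:

curl -X POST http://127.0.0.1:9696/1.0/extensions/csco/tenants/<tenant_id>/multiport/ \
  -H "Content-Type: application/json" \
  -d '{"multiport": {"status": "ACTIVE",
                     "net_id_list": ["<net_id1>", "<net_id2>"],
                     "ports_desc": {"key": "value"}}}'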

The corresponding CLI for this operation is as follows:

`PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport <tenant_id> <net_id1,net_id2,...>`

  • (Note that you should not be using the create port core API in the above case.)

Using an independent plugin as a device sub-plugin

If you would like to use an independent virtual switch plugin as one of the sub-plugins (e.g. the OpenVSwitch plugin) with the Nexus device sub-plugin, perform the following steps:

(The following instructions are with respect to the OpenVSwitch plugin.)

1. Update etc/quantum/plugins/cisco/l2network_plugin.ini

  • In the [MODEL] section of the configuration file put the following configuration (note that this should be the only configuration in this section; all other configuration should be either removed or commented out):

`model_class=quantum.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2`

2. Update etc/quantum/plugins/cisco/cisco_plugins.ini

  • In the [PLUGINS] section of the configuration file put the following configuration:
`vswitch_plugin=quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2`

3. Set the DB name; the same name has to be configured in three places (a consistent example is sketched after the list):

  • In etc/quantum/plugins/cisco/conf/db_conn.ini set the "name" value
  • In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini set the "sql_connection"
  • In /etc/quantum/dhcp_agent.ini set the "db_connection"
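
A hedged sketch of consistent entries, assuming the quantum_l2network database and placeholder MySQL credentials from the earlier steps:

# etc/quantum/plugins/cisco/conf/db_conn.ini
name = quantum_l2network

# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
sql_connection = mysql://<mysqlusername>:<mysqlpassword>@localhost/quantum_l2network

# /etc/quantum/dhcp_agent.ini
db_connection = mysql://<mysqlusername>:<mysqlpassword>@localhost/quantum_l2network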

4. The range of VLAN IDs has to be set in the OpenVSwitch configuration file:

  • In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini set:


   vlan_min = <lower_id>
   vlan_max = <higher_id>
   enable_tunneling = False

5. For Nexus device sub-plugin configuration refer to the sections above.

How to test the installation

The unit tests are located at quantum/tests/unit/cisco/. They can be executed from the top level Quantum directory using tox (install with `[sudo] pip install tox testrepository`).

1. Testing the core API (without UCS/Nexus/RHEL device sub-plugins configured):

  • By default all the device sub-plugins are disabled (commented out) in etc/quantum/plugins/cisco/cisco_plugins.ini


   tox -e py27 -- quantum.tests.unit.cisco.test_network_plugin
   tox -e py27 -- quantum.tests.unit.cisco.test_nexus_plugin


2. For testing the Nexus device sub-plugin perform the following configuration:

  • Edit etc/quantum/plugins/cisco/cisco_plugins.ini; in the [PLUGINS] section add:

`nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin`

  • Edit the etc/quantum/plugins/cisco/nexus.ini file. When not using Nexus hardware use the following dummy configuration verbatim:


[SWITCH]
nexus_ip_address=1.1.1.1
ports=1/10,1/11,1/12
nexus_ssh_port=22
[DRIVER]
name=quantum.plugins.cisco.test.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver
   Or when using Nexus hardware (put the values relevant to your setup):
[SWITCH]
nexus_ip_address=1.1.1.1
ports=1/10,1/11,1/12
nexus_ssh_port=22
[DRIVER]
name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver
  • (Note: Make sure that quantum/plugins/cisco/conf/credentials.ini has an entry for the Nexus switch IP address configured above, 1.1.1.1 in the dummy example.)

3. For testing the UCS device sub-plugin perform the following configuration:

  • Edit etc/quantum/plugins/cisco/cisco_plugins.ini; in the [PLUGINS] section add:

`ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_plugin_v2.UCSVICPlugin`

  • In the [INVENTORY] section add the following. When not using UCS hardware:

`ucs_plugin=quantum.plugins.cisco.test.ucs.cisco_ucs_inventory_fake.UCSInventory`

  • Or when using UCS hardware:

`ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_inventory_v2.UCSInventory`

  • Edit the etc/quantum/plugins/cisco/ucs.ini file. When not using UCS hardware:


[DRIVER]
name=quantum.plugins.cisco.test.ucs.fake_ucs_driver.CiscoUCSMFakeDriver
   Or when using UCS hardware:
[DRIVER]
name=quantum.plugins.cisco.ucs.cisco_ucs_network_driver.CiscoUCSMDriver
Copyright: 2013 Cisco Systems, Inc.