A Cisco Plugin Framework for Quantum Supporting L2 Networks Spanning Multiple Switches

=============================================================================

README for Quantum v2.0: A Plugin Framework for Supporting Quantum Networks Spanning Multiple Switches

=============================================================================

Introduction
------------

This plugin implementation provides the following capabilities:

  • A reference implementation for a Quantum Plugin Framework
    (For details see: http://wiki.openstack.org/quantum-multi-switch-plugin)
  • Supports multiple switches in the network
  • Supports multiple models of switches concurrently
  • Supports use of multiple L2 technologies
  • Supports the Cisco Nexus family of switches
  • Supports Cisco UCS blade servers with M81KR Virtual Interface Cards
    (aka "Palo adapters") via 802.1Qbh

Pre-requisites
--------------

(The following are necessary only when using the UCS and/or Nexus devices in your system. If you plan to just leverage the plugin framework, you do not need these.)

If you are using a Nexus switch in your topology, you'll need the following NX-OS version and packages to enable Nexus support:

  • NX-OS 5.2.1 (Delhi) Build 69 or above.
  • paramiko library - SSHv2 protocol library for python
  • ncclient v0.3.1 - Python library for NETCONF clients
    • You need a version of ncclient modified by Cisco Systems.
      To get it, from your shell prompt do:
      git clone git@github.com:CiscoSystems/ncclient.git
      cd ncclient
      sudo python ./setup.py install
    • For more information on ncclient, see:
      http://schmizz.net/ncclient/

If you are using UCS blade servers in your topology, you'll also need the following:

  • One or more UCS B200 series blade servers with M81KR VIC (aka
    "Palo adapters") installed.
  • UCSM 2.0 (Capitola) Build 230 or above.
  • OS supported:
    • RHEL 6.1 or above
    • Ubuntu 11.10 or above
    • Package: python-configobj-4.6.0-3.el6.noarch (or newer)
    • Package: python-routes-1.12.3-2.el6.noarch (or newer)
    • Package: mysql-python (pip install mysql-python)

Module Structure:
-----------------

  • quantum/plugins/cisco/ - Contains the Network Plugin Framework
                      /client - CLI module for core and extensions API
                      /common - Modules common to the entire plugin
                      /conf   - All configuration files
                      /db     - Persistence framework
                      /models - Class(es) which tie the logical abstractions
                                to the physical topology
                      /nova   - Scheduler and VIF-driver to be used by Nova
                      /nexus  - Nexus-specific modules
                      /segmentation - Implementation of segmentation manager,
                                      e.g. VLAN Manager
                      /services - Set of orchestration libraries to insert
                                  In-path Networking Services
                      /tests  - Tests specific to this plugin
                      /ucs    - UCS-specific modules

Plugin Installation Instructions
--------------------------------

1. Make a backup copy of quantum/etc/quantum.conf

2. Edit quantum/etc/quantum.conf and set the "core_plugin" entry for the v2 API:

core_plugin = quantum.plugins.cisco.network_plugin.PluginV2

3. MySQL database setup:

   3a.  Create quantum_l2network database in mysql with the following command -

mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
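
       To confirm the database was created (an optional check using the
       standard mysql client):

mysql -u<mysqlusername> -p<mysqlpassword> -e "show databases"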

   3b.  Enter the quantum_l2network database configuration info in the
        quantum/plugins/cisco/conf/db_conn.ini file.
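
       A minimal sketch of what db_conn.ini might contain (the key names
       shown here are assumptions for illustration; check the comments in
       the file itself for the authoritative names):

[DATABASE]
name = quantum_l2network
user = <put_db_username_here>
pass = <put_db_password_here>
host = <put_db_host_here>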

4. If you want to turn on support for Cisco Nexus switches:

   4a.  Uncomment the nexus_plugin property in
        etc/quantum/plugins/cisco/cisco_plugins.ini to read:

[PLUGINS]
nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin

   4b.  Enter the relevant configuration in the
        etc/quantum/plugins/cisco/nexus.ini file.  Example:

[SWITCH]
# IP address of the Nexus switch; the plugin exchanges configuration
# information via SSHv2.
nexus_ip_address=10.0.0.1
# Port numbers on the Nexus switch to which each one of the compute nodes is connected.
# Use shortened interface syntax, e.g. "1/10" not "Ethernet1/10", and "," between ports.
ports=1/10,1/11,1/12
# Port number where SSH will be running on the Nexus switch. Typically this is 22
# unless you've configured your switch otherwise.
nexus_ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver

   4c.  Make sure that the SSH host key of the Nexus switch is known to the
        host on which you are running the Quantum service.  You can do
        this simply by logging in to your Quantum host as the user that
        Quantum runs as and SSHing to the switch at least once.  If the
        host key changes (e.g. due to replacement of the supervisor or
        clearing of the SSH config on the switch), you may need to repeat
        this step and remove the old hostkey from ~/.ssh/known_hosts.
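
        For example, from the Quantum host (a sketch; "admin" and 10.0.0.1
        stand in for your switch credentials and the address configured in
        nexus.ini):

ssh admin@10.0.0.1

        If the host key has changed, remove the stale entry before repeating
        the login:

ssh-keygen -R 10.0.0.1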

5. If you are using UCS blade servers with M81KR Virtual Interface Cards and
   want to leverage the VM-FEX features:

   5a.  Uncomment the ucs_plugin properties in
        etc/quantum/plugins/cisco/cisco_plugins.ini to read:

[PLUGINS]
ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_plugin_v2.UCSVICPlugin

[INVENTORY]
ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_inventory_v2.UCSInventory

   5b.  Enter the relevant configuration in the
        etc/quantum/plugins/cisco/ucs.ini file.  Example:

[UCSM]
# Change the following to the appropriate UCSM IP address.
# If you have more than one UCSM, enter info from any one.
ip_address=<put_ucsm_ip_address_here>
default_vlan_name=default
default_vlan_id=1
max_ucsm_port_profiles=1024
profile_name_prefix=q-

[DRIVER]
name=quantum.plugins.cisco.ucs.cisco_ucs_network_driver.CiscoUCSMDriver

   5c.  Configure the UCS systems' information in your deployment by editing the
        quantum/plugins/cisco/conf/ucs_inventory.ini file. You can configure multiple
        UCSMs per deployment, multiple chassis per UCSM, and multiple blades per
        chassis. Chassis ID and blade ID can be obtained from the UCSM (they will
        typically be numbers like 1, 2, 3, etc.). Also make sure that you put the exact
        hostname as nova sees it (the host column in the services table of the nova
        DB will give you that information).

[ucsm-1]
ip_address = <put_ucsm_ip_address_here>
[[chassis-1]]
chassis_id = <put_the_chassis_id_here>
[[[blade-1]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
[[[blade-2]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
[[[blade-3]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>

[ucsm-2]
ip_address = <put_ucsm_ip_address_here>
[[chassis-1]]
chassis_id = <put_the_chassis_id_here>
[[[blade-1]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>
[[[blade-2]]]
blade_id = <put_blade_id_here>
host_name = <put_hostname_here>

   5d. Configure your OpenStack installation to use the 802.1qbh VIF driver and
       Quantum-aware scheduler by editing the /etc/nova/nova.conf file with the
       following entries:

scheduler_driver=quantum.plugins.cisco.nova.quantum_port_aware_scheduler.QuantumPortAwareScheduler
quantum_host=127.0.0.1
quantum_port=9696
libvirt_vif_driver=quantum.plugins.cisco.nova.vifdirect.Libvirt802dot1QbhDriver
libvirt_vif_type=802.1Qbh

   Note: To be able to bring up a VM on a UCS blade, you should first create a
         port for that VM using the Quantum create port API. VM creation will
         fail if an unused port is not available. If you have configured your
         Nova project with more than one network, Nova will attempt to instantiate
         the VM with one network interface (VIF) per configured network. To provide
         plugin points for each of these VIFs, you will need to create multiple
         Quantum ports, one for each of the networks, prior to starting the VM.
         However, in this case you will need to use the Cisco multiport extension
         API instead of the Quantum create port API. More details on using the
         multiport extension follow in the section on multi NIC support.
   To support the above configuration, you will need some Quantum modules. It's easiest
   to copy the entire quantum directory from your quantum installation into:

   /usr/lib/python2.7/site-packages/

   This needs to be done on each nova compute node.
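
   For example (a sketch assuming your Quantum source checkout lives in
   ~/quantum; adjust the path to your installation):

sudo cp -r ~/quantum/quantum /usr/lib/python2.7/site-packages/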

6. Verify that you have the correct credentials for each IP address listed
   in quantum/plugins/cisco/conf/credentials.ini.  Example:

# Provide the UCSM credentials; create a separate entry for each UCSM used
# in your system: UCSM IP address, username and password.
[10.0.0.2]
username=admin
password=mySecretPasswordForUCSM

# Provide the Nexus credentials, if you are using Nexus switches.
# If not, this will be ignored.
[10.0.0.1]
username=admin
password=mySecretPasswordForNexus

   In general, make sure that every UCSM and Nexus switch used in your system
   has a credential entry in the above file. This is required for the system to
   be able to communicate with those switches.

7. Start the Quantum service.  If something doesn't work, verify your
   configuration of each of the above files.
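
   One common way to start the service from a source checkout is shown below
   (a sketch; the config file path is an assumption, adjust it to your
   installation):

python bin/quantum-server --config-file etc/quantum.conf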

Multi NIC support for VMs
-------------------------

As indicated earlier, if your Nova setup has a project with more than one network, Nova will try to create a virtual network interface (VIF) on the VM for each of those networks. Before each VM is instantiated, you should create Quantum ports on each of those networks. These ports need to be created using the following REST call:

POST /1.0/extensions/csco/tenants/{tenant_id}/multiport/

with request body:

{'multiport':
 {'status': 'ACTIVE',
  'net_id_list': net_id_list,
  'ports_desc': {'key': 'value'}}}


where:

net_id_list is a list of network IDs: [netid1, netid2, ...]. The "ports_desc" dictionary is reserved for later use. For now, the same structure in terms of the dictionary name, key and value should be used.
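
For illustration, here is the same call issued with curl against a Quantum service at the host/port used in the nova.conf entries above (a sketch; the tenant ID "demo" and the JSON-style quoting of the body are assumptions):

curl -X POST http://127.0.0.1:9696/1.0/extensions/csco/tenants/demo/multiport/ \
     -H "Content-Type: application/json" \
     -d '{"multiport": {"status": "ACTIVE", "net_id_list": ["<net_id1>", "<net_id2>"], "ports_desc": {"key": "value"}}}'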

The corresponding CLI for this operation is as follows:

PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport <tenant_id> <net_id1,net_id2,...>
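
For example, to create one port on each of two networks for tenant "demo" (the network and port IDs below are purely illustrative):

PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport demo c4a2bea7-a528-4caf-b16e-80397cd1663a,0e85e924-6ef6-40c1-9f7a-3520ac6888b3
Created ports: {u'ports': [{u'id': u'118ac473-294d-480e-8f6d-425acbbe81ae'}, {u'id': u'996e84b8-2ed3-40cf-be75-de17ff1214c4'}]}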

   (Note that you should not be using the create port core API in the above case.)

Using an independent plugin as a device sub-plugin
--------------------------------------------------

If you would like to use an independent virtual switch plugin as one of the sub-plugins (e.g. the OpenVSwitch plugin) together with the Nexus device sub-plugin, perform the following steps. (The following instructions are with respect to the OpenVSwitch plugin.)

1. Update etc/quantum/plugins/cisco/l2network_plugin.ini

  In the [MODEL] section of the configuration file put the following configuration
  (note that this should be the only configuration in this section; all other
  configuration should be either removed or commented out):

   model_class=quantum.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2

2. Update etc/quantum/plugins/cisco/cisco_plugins.ini

  In the [PLUGINS] section of the configuration file put the following configuration:
  vswitch_plugin=quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

3. Set the DB name; the same name has to be configured in three places:

  In etc/quantum/plugins/cisco/conf/db_conn.ini set the "name" value
  In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini set the "sql_connection"
  In /etc/quantum/dhcp_agent.ini set the "db_connection"

4. The range of VLAN IDs has to be set in the OpenVSwitch configuration file:

  In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
  Set:
  vlan_min = <lower_id>
  vlan_max = <higher_id>
  enable_tunneling = False
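
  Putting it together, the relevant parts of ovs_quantum_plugin.ini might look
  like this (a sketch; the section names and the example VLAN range are
  assumptions to be checked against your copy of the file):

[DATABASE]
sql_connection = mysql://<user>:<password>@<host>/quantum_l2network

[OVS]
vlan_min = 100
vlan_max = 199
enable_tunneling = False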

5. For Nexus device sub-plugin configuration, refer to the sections above.

How to test the installation
----------------------------

The unit tests are located at quantum/plugins/cisco/tests/unit/v2. They can be executed from the top level Quantum directory using the run_tests.sh script.

1. Testing the core API (without UCS/Nexus/RHEL device sub-plugins configured):

  By default all the device sub-plugins are disabled (commented out) in
  etc/quantum/plugins/cisco/cisco_plugins.ini. Execute the tests as follows:

  ./run_tests.sh quantum.plugins.cisco.tests.unit.v2.test_api_v2
  ./run_tests.sh quantum.plugins.cisco.tests.unit.v2.test_network_plugin

2. For testing the Nexus device sub-plugin perform the following configuration:

  Edit etc/quantum/plugins/cisco/cisco_plugins.ini: in the [PLUGINS] section add:

nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin

  Edit the etc/quantum/plugins/cisco/nexus.ini file.
  When not using Nexus hardware use the following dummy configuration verbatim:

[SWITCH]
nexus_ip_address=1.1.1.1
ports=1/10,1/11,1/12
nexus_ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.tests.unit.v2.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver

  Or when using Nexus hardware (put the values relevant to your setup):

[SWITCH]
nexus_ip_address=1.1.1.1
ports=1/10,1/11,1/12
nexus_ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver

  (Note: Make sure that quantum/plugins/cisco/conf/credentials.ini has an entry
   for the Nexus switch IP address configured in nexus.ini.)
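
  For example, matching the dummy switch address above (an illustrative entry;
  substitute real credentials when using actual hardware):

[1.1.1.1]
username=admin
password=mySecretPasswordForNexus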

3. For testing the UCS device sub-plugin perform the following configuration:

  Edit etc/quantum/plugins/cisco/cisco_plugins.ini: in the [PLUGINS] section add:

ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_plugin_v2.UCSVICPlugin

  In the [INVENTORY] section add:
  When not using UCS hardware:

ucs_plugin=quantum.plugins.cisco.tests.unit.v2.ucs.cisco_ucs_inventory_fake.UCSInventory

  Or when using UCS hardware:

ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_inventory_v2.UCSInventory

  Edit the etc/quantum/plugins/cisco/ucs.ini file.
  When not using UCS hardware:

[DRIVER]
name=quantum.plugins.cisco.tests.unit.v2.ucs.fake_ucs_driver.CiscoUCSMFakeDriver

  Or when using UCS hardware:

[DRIVER]
name=quantum.plugins.cisco.ucs.cisco_ucs_network_driver.CiscoUCSMDriver

Copyright: 2012 Cisco Systems, Inc.