Cisco-quantum


A Cisco Plugin Framework for Quantum Supporting L2 Networks Spanning Multiple Switches

Introduction


This plugin implementation provides the following capabilities to help you take your Layer 2 network for a Quantum leap:

  • A reference implementation for a Quantum Plugin Framework
    (For details, see: http://wiki.openstack.org/quantum-multi-switch-plugin)

  • Supports multiple switches in the network
  • Supports multiple models of switches concurrently
  • Supports use of multiple L2 technologies
  • Supports Cisco UCS blade servers with M81KR Virtual Interface Cards
 (aka "Palo adapters") via 802.1Qbh.
  • Supports the Cisco Nexus family of switches.

It does not provide:

  • A hologram of Al that only you can see.
  • A map to help you find your way through time.
  • A cure for amnesia or your swiss-cheesed brain.

Let's leap in!

Pre-requisites


(The following are necessary only when using UCS and/or Nexus devices in your system. If you plan to use just the plugin framework, you do not need these.)

  • One or more UCS B200 series blade servers with M81KR VIC (aka
 Palo adapters) installed.
  • UCSM 2.0 (Capitola) Build 230 or above.
  • OpenStack Diablo D3 or later (should have VIF-driver support)
  • OS supported:
    • RHEL 6.1 or above
    • Ubuntu 11.10 or above
    • Package: python-configobj-4.6.0-3.el6.noarch (or newer)
    • Package: python-routes-1.12.3-2.el6.noarch (or newer)

If you are using a Nexus switch in your topology, you'll need the following NX-OS version and packages to enable Nexus support:

  • NX-OS 5.2.1 (Delhi) Build 69 or above.
  • paramiko library - SSHv2 protocol library for python
  • ncclient v0.3.1 - Python library for NETCONF clients
    • You need a version of ncclient modified by Cisco Systems.
      To get it, from your shell prompt do:

      git clone git@github.com:CiscoSystems/ncclient.git
      cd ncclient
      sudo python ./setup.py install

    • For more information on ncclient, see:
      http://schmizz.net/ncclient/

Module Structure


  • quantum/plugins/cisco/ - Contains the L2-Network Plugin Framework
                      /client - CLI module for core and extensions API
                      /common - Modules common to the entire plugin
                      /conf   - All configuration files
                      /db     - Persistence framework
                      /models - Class(es) which tie the logical abstractions
                                to the physical topology
                      /nova   - Scheduler and VIF-driver to be used by Nova
                      /nexus  - Nexus-specific modules
                      /segmentation - Implementation of segmentation manager,
                                      e.g. VLAN Manager
                      /services - Set of orchestration libraries to insert
                                  In-path Networking Services
                      /tests  - Tests specific to this plugin
                      /ucs    - UCS-specific modules

Plugin Installation Instructions


1. Make a backup copy of quantum/etc/plugins.ini.

2. Edit quantum/etc/plugins.ini and set the "provider" entry to point

   to the L2Network plugin:

provider = quantum.plugins.cisco.l2network_plugin.L2Network

3. Configure your OpenStack installation to use the 802.1qbh VIF driver and

   Quantum-aware scheduler by editing the /etc/nova/nova.conf file with the
   following entries:

--scheduler_driver=quantum.plugins.cisco.nova.quantum_port_aware_scheduler.QuantumPortAwareScheduler
--quantum_host=127.0.0.1
--quantum_port=9696
--libvirt_vif_driver=quantum.plugins.cisco.nova.vifdirect.Libvirt802dot1QbhDriver
--libvirt_vif_type=802.1Qbh
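
   After editing nova.conf, restart the affected Nova services so they pick
   up the new flags. A minimal sketch, assuming a packaged install where
   Nova runs under service scripts (adjust to how you launch Nova):

sudo service nova-scheduler restart   # picks up --scheduler_driver
sudo service nova-compute restart     # picks up the libvirt VIF driver flags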

   Note: To be able to bring up a VM on a UCS blade, you should first create a
         port for that VM using the Quantum create port API. VM creation will
         fail if an unused port is not available. If you have configured your
         Nova project with more than one network, Nova will attempt to instantiate
         the VM with one network interface (VIF) per configured network. To provide
         plugin points for each of these VIFs, you will need to create multiple
         Quantum ports, one for each of the networks, prior to starting the VM.
         However, in this case you will need to use the Cisco multiport extension
         API instead of the Quantum create port API. More details on using the
         multiport extension follow in the section on multi NIC support.
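
   For the single-network case, the port can be created up front using the
   CLI packaged with this plugin (a sketch: the create_port verb and its
   argument order are assumptions patterned on the core Quantum CLI, which
   this module wraps):

PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_port <tenant_id> <net_id>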

4. To support the above configuration, you will need some Quantum modules. It's easiest

   to copy the entire quantum directory from your quantum installation into:

/usr/lib/python2.7/site-packages/

   This needs to be done for each nova compute node.
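
   As a sketch, assuming your Quantum checkout lives in /opt/stack/quantum
   (substitute your actual source path, Python version, and node name):

# copy the quantum package onto a compute node's Python path
scp -r /opt/stack/quantum/quantum <compute-node>:/usr/lib/python2.7/site-packages/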

5. If you want to turn on support for Cisco Nexus switches:

   5a.  Uncomment the nexus_plugin property in
        etc/quantum/plugins/cisco/cisco_plugins.ini to read:

nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin.NexusPlugin

   5b.  Enter the relevant configuration in the
        etc/quantum/plugins/cisco/nexus.ini file.  Example:

[SWITCH]
nexus_ip_address=10.0.0.1
nexus_first_port=1/10
nexus_second_port=1/11
nexus_ssh_port=22

[DRIVER]
name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver

   5c.  Make sure that SSH host key of the Nexus switch is known to the
        host on which you are running the Quantum service.  You can do
        this simply by logging in to your Quantum host as the user that
        Quantum runs as and SSHing to the switch at least once.  If the
        host key changes (e.g. due to replacement of the supervisor or
        clearing of the SSH config on the switch), you may need to repeat
        this step and remove the old hostkey from ~/.ssh/known_hosts.
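
   For example (a sketch, using the switch address from the nexus.ini
   example above and an assumed admin account; run it as the user the
   Quantum service runs as):

ssh admin@10.0.0.1   # accept the host key when prompted, then log out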

6. Plugin Persistence framework setup:

   6a.  Create the quantum_l2network database in MySQL with the following command -

mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"

   6b.  Enter the quantum_l2network database configuration info in the
        quantum/plugins/cisco/conf/db_conn.ini file.
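        As a sketch of that file's contents (the section and key names shown
        here are assumptions; check the sample db_conn.ini shipped with the
        plugin for the exact names):

[DATABASE]
name = quantum_l2network
user = <put_database_username_here>
pass = <put_database_password_here>
host = <put_database_server_ip_here>
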
   6c.  If there is a change in the plugin configuration, the service will
        need to be restarted after dropping and re-creating the database
        using the following commands -

mysql -u<mysqlusername> -p<mysqlpassword> -e "drop database quantum_l2network"
mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
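
   If the database user configured for the plugin is not root, it also needs
   privileges on the re-created database. A minimal sketch, assuming a user
   named 'quantum' connecting from localhost (swap in the name and password
   you configured):

mysql -u<mysqlusername> -p<mysqlpassword> -e "GRANT ALL PRIVILEGES ON quantum_l2network.* TO 'quantum'@'localhost' IDENTIFIED BY '<password>'"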

7. Verify that you have the correct credentials for each IP address listed

   in quantum/plugins/cisco/conf/credentials.ini.  Example:

[10.0.0.2]
username=admin
password=mySecretPasswordForUCSM

[10.0.0.1]
username=admin
password=mySecretPasswordForNexus

   In general, make sure that every UCSM and Nexus switch used in your system
   has a credential entry in the above file. This is required for the system
   to be able to communicate with those switches.

8. Configure the UCS systems' information in your deployment by editing the

   quantum/plugins/cisco/conf/ucs_inventory.ini file. You can configure multiple
   UCSMs per deployment, multiple chassis per UCSM, and multiple blades per
   chassis. Chassis ID and blade ID can be obtained from the UCSM (they will
   typically be numbers like 1, 2, 3, etc.). Also make sure that you put the exact
   hostname as nova sees it (the host column in the services table of the nova
   DB will give you that information).

[ucsm-1]
ip_address = <put_ucsm_ip_address_here>
    [[chassis-1]]
    chassis_id = <put_the_chassis_id_here>
        [[[blade-1]]]
        blade_id = <put_blade_id_here>
        host_name = <put_hostname_here>
        [[[blade-2]]]
        blade_id = <put_blade_id_here>
        host_name = <put_hostname_here>
        [[[blade-3]]]
        blade_id = <put_blade_id_here>
        host_name = <put_hostname_here>

[ucsm-2]
ip_address = <put_ucsm_ip_address_here>
    [[chassis-1]]
    chassis_id = <put_the_chassis_id_here>
        [[[blade-1]]]
        blade_id = <put_blade_id_here>
        host_name = <put_hostname_here>
        [[[blade-2]]]
        blade_id = <put_blade_id_here>
        host_name = <put_hostname_here>
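
   To look up the exact hostname as Nova sees it, you can query the Nova
   database directly. A sketch, assuming the default database name 'nova'
   and your Nova DB credentials:

mysql -u<novadbusername> -p<novadbpassword> nova -e "select host from services where services.binary = 'nova-compute'"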

9. Start the Quantum service. If something doesn't work, verify that

   your configuration of each of the above files hasn't gone a little kaka.
   Once you've put right what once went wrong, leap on.

Multi NIC support for VMs


As indicated earlier, if your Nova setup has a project with more than one network, Nova will try to create a virtual network interface (VIF) on the VM for each of those networks. That implies -

   (1) You should create the same number of networks in Quantum as in your Nova
       project.
   (2) Before each VM is instantiated, you should create Quantum ports on each of those
       networks. These ports need to be created using the following rest call:

POST /1.0/extensions/csco/tenants/{tenant_id}/multiport/

with request body:

{'multiport':
 {'status': 'ACTIVE',
  'net_id_list': net_id_list,
  'ports_desc': {'key': 'value'}}}


where,

net_id_list is a list of network IDs: [netid1, netid2, ...]. The "ports_desc" dictionary is reserved for later use. For now, the same structure in terms of the dictionary name, key and value should be used.
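
As a concrete sketch, here is the same call issued with curl (assumptions:
the Quantum service configured in step 3 is listening at 127.0.0.1:9696, the
body is JSON-encoded, and the tenant and network IDs are placeholders):

curl -X POST http://127.0.0.1:9696/1.0/extensions/csco/tenants/<tenant_id>/multiport/ \
  -H "Content-Type: application/json" \
  -d '{"multiport": {"status": "ACTIVE", "net_id_list": ["<net_id1>", "<net_id2>"], "ports_desc": {"key": "value"}}}'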

The corresponding CLI for this operation is as follows:

PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport <tenant_id> <net_id1,net_id2,...>

   (Note that you should not be using the create port core API in the above case.)

Using the Command Line Client to work with this Plugin


A command line client is packaged with this plugin. The module can invoke both the core API and the extensions API, so you don't have to switch between different CLI modules (internally it invokes the Quantum CLI module for the core APIs, which keeps behavior consistent whichever you use). It can be invoked as follows:

PYTHONPATH=.:tools python quantum/plugins/cisco/client/cli.py
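
The session below shows only the command outputs. The sub-commands used were
along these lines (a sketch; apart from create_multiport, shown earlier,
these verb names are assumptions patterned on the core Quantum CLI that this
module wraps, so run the module without arguments to see the exact list):

PYTHONPATH=.:tools python quantum/plugins/cisco/client/cli.py create_net <tenant_id> <net_name>
PYTHONPATH=.:tools python quantum/plugins/cisco/client/cli.py list_nets <tenant_id>
PYTHONPATH=.:tools python quantum/plugins/cisco/client/cli.py plug_iface <tenant_id> <net_id> <port_id> <iface_id>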

1. Creating the network

Created a new Virtual Network with ID: c4a2bea7-a528-4caf-b16e-80397cd1663a for Tenant demo

2. Listing the networks

Virtual Networks for Tenant demo

   Network ID: 0e85e924-6ef6-40c1-9f7a-3520ac6888b3
   Network ID: c4a2bea7-a528-4caf-b16e-80397cd1663a

3. Creating one port on each of the networks

Created ports: {u'ports': [{u'id': u'118ac473-294d-480e-8f6d-425acbbe81ae'}, {u'id': u'996e84b8-2ed3-40cf-be75-de17ff1214c4'}]}

4. List all the ports on a network

Ports on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a for Tenant: demo

   Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae

5. Show the details of a port

Logical Port ID: 118ac473-294d-480e-8f6d-425acbbe81ae administrative State: ACTIVE interface: <none> on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a for Tenant: demo

6. Start the VM instance using Nova

   Note that when using UCS and the 802.1Qbh features, the association of the
   VIF-ID (also referred to as interface ID) on the VM's NIC with a port will
   happen automatically when the VM is instantiated. At this point, doing a
   show_port will reveal the VIF-ID associated with the port. To indicate that
   this VIF-ID is still detached from the network it would eventually be on, you
   will see the suffix "(detached)" on the VIF-ID. This indicates that although
   the VIF-ID and the port have been associated, the VIF still does not have
   connectivity to the network on which the port resides. That connectivity
   will be established only after the plug/attach operation is performed (as
   described in the next step).

Logical Port ID: 118ac473-294d-480e-8f6d-425acbbe81ae administrative State: ACTIVE interface: b73e3585-d074-4379-8dde-931c0fc4db0e(detached) on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a for Tenant: demo

7. Plug interface and port into the network

   Use the interface information obtained in step 6 to plug the interface into
   the network.

Plugged interface b73e3585-d074-4379-8dde-931c0fc4db0e into Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a for Tenant: demo

8. Unplug an interface and port from the network

Unplugged interface from Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a for Tenant: demo

   Note: After unplugging, if you check the details of the port, you will
   see the VIF-ID associated with the port (but now suffixed with the state
   "detached"). At this point, it is possible to plug the VIF into the network
   again using the same VIF-ID. In general, once associated, the VIF-ID
   cannot be disassociated from the port until the VM is terminated. After the
   VM is terminated, the VIF-ID will be automatically disassociated from the
   port. To summarize, association and disassociation of the VIF-ID with a port
   happens automatically at the time of creating and terminating the VM. The
   connectivity of the VIF to the network is controlled by the user via the
   plug and unplug operations.

How to test the installation


The unit tests are located at quantum/plugins/cisco/tests/unit. They can be executed from the main folder using the run_tests.sh script, or, for more detailed results, the run_tests.py script.

1. All unit tests (needs environment setup as indicated in the pre-requisites):

  ./run_tests.sh -N quantum.plugins.cisco.tests.unit

  or, after setting an environment variable to point to the plugin directory:

      In bash:     export PLUGIN_DIR=quantum/plugins/cisco
      In tcsh/csh: setenv PLUGIN_DIR quantum/plugins/cisco

      ./run_tests.sh -N

  Another option is to execute the Python script run_tests.py:

  python run_tests.py quantum.plugins.cisco.tests.unit

2. Testing the core API (does not require UCS/Nexus hardware or RHEL; can be

  run on Ubuntu):
  Device-specific plugins can be disabled by commenting out the entries in:
  etc/quantum/plugins/cisco/cisco_plugins.ini
  The Core API can be tested by initially disabling all device plugins, then
  enabling just the UCS plugins, and finally enabling both the UCS and the
  Nexus plugins.
  Execute the test script as follows:
  ./run_tests.sh -N quantum.plugins.cisco.tests.unit.test_l2networkApi
  or
  python run_tests.py quantum.plugins.cisco.tests.unit.test_l2networkApi

3. Specific Plugin unit test (needs environment setup as indicated in the

  pre-requisites):
  ./run_tests.sh -N quantum.plugins.cisco.tests.unit.<name_of_the_module>
  or
  python run_tests.py quantum.plugins.cisco.tests.unit.<name_of_the_module>
  E.g.:
  python run_tests.py quantum.plugins.cisco.tests.unit.test_ucs_plugin
  To run specific tests, use the following:
  python run_tests.py
   quantum.plugins.cisco.tests.unit.<name_of_the_module>:<ClassName>.<funcName>
  E.g.:
  python run_tests.py
   quantum.plugins.cisco.tests.unit.test_ucs_plugin:UCSVICTestPlugin.test_create_port

4. Testing the Extension API

  The script is placed along with the other Cisco unit tests. The location
  may change later.
  Location: quantum/plugins/cisco/tests/unit/test_cisco_extension.py
  The script can be executed by:
   ./run_tests.sh -N quantum.plugins.cisco.tests.unit.test_cisco_extension
   or
   python run_tests.py quantum.plugins.cisco.tests.unit.test_cisco_extension