
Mellanox-Quantum



Overview

Mellanox Quantum Plugin

The OpenStack Mellanox Quantum Plugin supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA. The plugin provides a hardware vNIC (based on an SR-IOV virtual function) for each Virtual Machine vNIC, with its own connectivity, security, and QoS attributes. Hardware vNICs can be mapped to guest VMs through para-virtualization (using a tap device) or directly as a virtual PCI device, allowing higher performance and advanced features such as RDMA (remote direct memory access).

Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments. Future versions of the plugin will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.

This plugin is implemented according to the Plugin/Agent pattern.


   	         +-----------------+                       +--------------+
                 | Controller node |                       | Compute node |
       +-----------------------------------+     +-----------------------------------+
       |  +-----------+      +----------+  |     |  +----------+       +----------+  |
       |  |           |      |          |  |     |  |          |  zmq  |          |  |
       |  | Openstack | v2.0 | Mellanox |  | RPC |  | Mellanox |REQ/REP| Mellanox |  |
       |  | Quantum   +------+ Quantum  +-----------+ Quantum  +-------+ Embedded |  |
       |  |           |      | Plugin   |  |     |  | Agent    |       | Switch   |  |
       |  |           |      |          |  |     |  |          |       | (NIC)    |  |
       |  +-----------+      +----------+  |     |  +----------+       +----------+  |
       +-----------------------------------+     +-----------------------------------+
  • The OpenStack Mellanox Quantum Plugin implements the Quantum v2.0 API.
  • The Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation.
  • The plugin uses a database to store configuration and allocation mappings.
  • The plugin maintains compatibility with the Linux Bridge Plugin and supports the DHCP and L3 agents by running the L2 Linux Bridge Agent on the Network Node.
  • The Mellanox OpenStack Quantum Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Mellanox Nova VIF Driver

The Mellanox Nova VIF driver should be used when running the Mellanox Quantum Plugin. This driver supports VIF plugging by binding the vNIC (para-virtualized or SR-IOV, with optional RDMA guest access) to the embedded switch port.

Prerequisites

The following are the Mellanox Quantum Plugin prerequisites:

1. The software package python-zmq (github) must be installed. It is also available from the EPEL repository.

2. python-setuptools (use "yum install python-setuptools")

3. python-pip (use "yum install python-pip")
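
For example, assuming a yum-based system with the EPEL repository already enabled, python-zmq can be installed with:

  #yum install python-zmq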

Prerequisites for Compute Servers only:

1. RHEL 6.3 or above.

2. Compute nodes should be equipped with a Mellanox ConnectX®-3 Network Adapter (link)

3. Mellanox OFED 2.0.3 is installed. Contact openstack@mellanox.com to retrieve this version. Refer to the Mellanox website for the latest OFED documentation (link)

4. Enable SR-IOV on the ConnectX-3 card. Refer to the Mellanox Community

5. The software package iproute2 (Code, Documentation) must be installed. Required only on compute nodes.

6. The software package ethtool (Code) must be installed (version 3.8 or higher). Required only on compute nodes.

7. oslo.config (use "pip-python install oslo.config"). Required only on compute nodes.

8. The python-ethtool and python-zmq packages must be installed.
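
As a sketch, assuming a yum-based compute node with the EPEL repository enabled, the software prerequisites in items 5-8 above can be installed as follows (on RHEL the iproute2 utilities are packaged as iproute; Mellanox OFED and SR-IOV must still be set up separately):

  #yum install iproute ethtool python-ethtool python-zmq
  #pip-python install oslo.config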

Installation with RHOS

For each section, refer to 'Installation via RPMs'. There are no additional prerequisites beyond those required by RHOS.

For additional information regarding RHOS, please refer to http://www.redhat.com/products/enterprise-linux/openstack-platform.

Code Structure

The Mellanox Quantum Plugin and the Nova VIF driver are located at Mellanox OpenStack RPMs.

1. Quantum Plugin package structure:

  quantum/etc/quantum/plugins/mlnx - plugin configuration
  mlnx_conf.ini - sample plugin configuration
  quantum/quantum/plugins/mlnx - plugin code
  /agent - agent code
  /common - common code
  /db - plugin persistency model and wrapping methods
  mlnx_plugin.py - Mellanox OpenStack plugin
  rpc_callbacks.py - RPC handler for received messages
  agent_notify_api.py - agent RPC notify methods

Mellanox Quantum Plugin is located under quantum/quantum/plugins/.

2. Nova VIF driver package structure is:

  nova/nova/virt/libvirt/mlnx - Nova VIF driver code

The Mellanox Nova VIF driver is located under nova/virt/libvirt/mlnx.

Mellanox Quantum Plugin Installation (for Grizzly)

On the Controller Node

In /etc/nova/nova.conf ensure you have the following line:

security_group_api=nova

If you changed it, restart the Nova services. For example:

  cd /etc/init.d && for i in $( ls nova-*); do service $i restart; done

On the Quantum Server Node

Creating MySQL database for Quantum Server

The Quantum server uses a MySQL database. Make sure a MySQL database is running and available for Quantum.

If the Quantum server is already running, stop it:

  #/etc/init.d/quantum-server stop

If a quantum database already exists, you have to drop it and create a new one. For example:

  mysql -u root -ppassword <<EOF
  drop database if exists quantum;
  create database quantum;
  EOF

If you are creating the database for the first time, also grant privileges to the quantum user:

  mysql -u root -ppassword <<EOF
  drop database if exists quantum;
  create database quantum;
  GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON quantum.* TO 'quantum' IDENTIFIED BY 'password';
  FLUSH PRIVILEGES;
  EOF

Installation via RPMs - for RHEL 6.3 and above

1. Download the Mellanox Quantum Plugin RPM from the Mellanox Community OpenStack RPMs for Grizzly:

   #wget http://www.mellanox.com/downloads/solutions/openstack/grizzly/0.3/openstack-quantum-mlnx-2013.1.2-3.el6.noarch.rpm

2. Install the required RPMs for the Quantum server:

  #yum install openstack-quantum-mlnx-2013.1.2-3.el6.noarch.rpm

3. Modify the /etc/quantum/quantum.conf file.

  core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin

4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment. See the Mellanox Quantum Plugin Configuration section below for the configuration options.

5. Change the soft link to the plugin configuration (plugin.ini)

  #unlink /etc/quantum/plugin.ini 
  #ln -s /etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugin.ini

6. Start the quantum server

  #/etc/init.d/quantum-server start

Manual installation - for other operating systems

1. Make sure the Quantum server is installed and stopped:

  #/etc/init.d/quantum-server stop

2. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).

  #git clone https://github.com/mellanox-openstack/mellanox-quantum-plugin
  #cd mellanox-quantum-plugin
  #git checkout v0.3
  #cp -a quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins

3. Modify the /etc/quantum/quantum.conf file.

  core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin

4. Copy the Mellanox plugin configuration.

  #mkdir -p /etc/quantum/plugins/mlnx
  #cp quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx

5. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment. See the Mellanox Quantum Plugin Configuration section below for the configuration options.

6. If you run the Quantum server using an init script, change the Quantum server configuration to point to the Mellanox Quantum Plugin configuration file.

For example, on Ubuntu change /etc/default/quantum-server:

  QUANTUM_PLUGIN_CONFIG="/etc/quantum/plugins/mlnx/mlnx_conf.ini"

7. Run the server:

  #quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini

or

  #/etc/init.d/quantum-server start

On Compute Nodes

The eswitchd Daemon

1. Download the eswitchd RPM:

   #wget http://www.mellanox.com/downloads/solutions/openstack/grizzly/0.3/eswitchd-0.3-1.x86_64.rpm

2. Install the eswitchd RPM:

   #yum install eswitchd-0.3-1.x86_64.rpm

3. Configure /etc/eswitchd/eswitchd.conf if needed.

Please refer to the Mellanox Community for the eswitchd installation notes (click here).
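
As a minimal sketch, for an Ethernet fabric the setting that typically needs attention is the fabrics mapping, which maps the fabric name to the physical interface (or to automatic configuration) and should match the physical_interface_mapping used by the Quantum agent; refer to the Mellanox Community notes for the authoritative option list:

  fabrics = default:autoeth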

Nova-compute

1. Make sure nova is installed on your server.

2. Download and copy the Nova Mellanox VIF driver.

   #git clone https://github.com/mellanox-openstack/mellanox-quantum-plugin
   #cd mellanox-quantum-plugin
   #git checkout v0.3
   #cp -a nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt

or

  #wget https://github.com/mellanox-openstack/mellanox-quantum-plugin/archive/v0.3.tar.gz
  #tar zxvf v0.3.tar.gz
  #cp -a mellanox-quantum-plugin-0.3/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt

3. Modify /etc/nova/nova.conf

   compute_driver=nova.virt.libvirt.driver.LibvirtDriver
   libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver
   fabric=default - specifies the physical network for vNICs (currently one fabric per node is supported)
   security_group_api=nova

4. Restart Nova.

  #/etc/init.d/openstack-nova-compute restart

Quantum Agent

Installation via RPMs - for RHEL 6.3 and above

1. Download the RPM from the Mellanox Community:

   #wget http://www.mellanox.com/downloads/solutions/openstack/grizzly/0.3/openstack-quantum-mlnx-2013.1.2-3.el6.noarch.rpm

2. Install the required RPMs for the Quantum agent:

  #yum install openstack-quantum-mlnx-2013.1.2-3.el6.noarch.rpm

3. Copy the /etc/quantum/quantum.conf file from the Quantum server, and adjust it if needed.

4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment. See the Mellanox Quantum Plugin Configuration section below for the configuration options.

5. Start the eswitchd daemon:

   #/etc/init.d/eswitchd start

6. Start the Quantum agent:

  #/etc/init.d/quantum-mlnx-agent start

Note: the eswitchd daemon must be running before the Quantum agent is started.

Mellanox Quantum Plugin Configuration

Quantum Configuration

1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing core_plugin.

  core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin

2. Database configuration: Install MySQL on the central server. Create a database named "quantum".

3. Plugin configuration: Edit the configuration file: /etc/quantum/plugins/mlnx/mlnx_conf.ini

On the central server node

  [DATABASE]
  sql_connection - The parameter should be changed to the connection of the Quantum DB used by the plugin.
                   It must match the mysql configuration. For example:
                   "mysql://quantum:password@127.0.0.1:3306/quantum"
  reconnect_interval = 2 (default)


  [MLNX]
  tenant_network_type - must be set to one of the supported tenant network types.
                        Possible values: 'vlan' for Ethernet or 'ib' for InfiniBand. 'vlan' is the default.
  network_vlan_ranges - must be configured to specify the names of the physical networks
                        managed by the Mellanox plugin, along with the ranges of VLAN IDs
                        available on each physical network for allocation to virtual networks.
                        The format is <fabric name>:<vlan range start>:<vlan range end>.
                        The possible VLAN range is 1-4093.
                        The default is "default:1:100".

On the compute node(s)

  [AGENT]
  polling_interval - polling interval (in seconds) for existing vNICs. The default is 2 seconds.
  rpc - must be set to 'True'.

  [ESWITCH]
  physical_interface_mapping - maps each physical network name to the physical interface (on top of the
                               Mellanox adapter) connecting the node to that physical network.
                               The format of this parameter is <fabric name>:<PF name> (only relevant on compute nodes).
                               PF name can be either the PF (Physical Function) name, 'autoeth' for automatic
                               Ethernet configuration, or 'autoib' for automatic InfiniBand configuration.
                               The default is "default:autoeth".
  vnic_type - type of VM network interface: 'direct' or 'hostdev', according to libvirt terminology.
        direct: provides a macvtap device on top of the PCI device (default).
        hostdev: the traditional method of assigning a generic PCI device to a guest (SR-IOV).
        bridge: used when running the Linux Bridge Plugin on top of an eIPoIB device.
  daemon_endpoint - eswitchd daemon endpoint connection (URL). The default is 'tcp://127.0.0.1:5001'.
  request_timeout - the number of milliseconds the agent waits for a response to a request to the daemon.
                    The default is 3000 msec.

For a plugin configuration file example (Grizzly), please refer to the Mellanox sample *.ini file; an illustrative example is also shown below.
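
The following is a minimal sketch of /etc/quantum/plugins/mlnx/mlnx_conf.ini combining the options described above (the [DATABASE] and [MLNX] sections are used on the central server node, [AGENT] and [ESWITCH] on the compute nodes); all values are the illustrative defaults and should be adjusted to your environment:

  [DATABASE]
  sql_connection = mysql://quantum:password@127.0.0.1:3306/quantum
  reconnect_interval = 2

  [MLNX]
  tenant_network_type = vlan
  network_vlan_ranges = default:1:100

  [AGENT]
  polling_interval = 2
  rpc = True

  [ESWITCH]
  physical_interface_mapping = default:autoeth
  vnic_type = direct
  daemon_endpoint = tcp://127.0.0.1:5001
  request_timeout = 3000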

Nova Configuration (Compute Node(s))


Edit the nova.conf file.

1. Configure the VIF driver and the libvirt connection type:

  compute_driver=nova.virt.libvirt.driver.LibvirtDriver
  connection_type=libvirt
  libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver

2. Configure vnic_type ('direct' or 'hostdev'). This will be the default vNIC type when creating a new vNIC (for example, when creating a new vNIC from the dashboard).

  vnic_type= direct 

3. Define the physical network managed by the embedded switch (currently a single fabric per node).

  fabric=default - specifies physical network for vNICs

4. Enable DHCP server to allow VMs to acquire IPs.

  quantum_use_dhcp=true

Network Node

A network node equipped with a Mellanox ConnectX-3 adapter card should be configured as follows:

1. Install Mellanox OFED.

2. Install the Quantum Linux Bridge plugin and the Quantum DHCP/L3 agents:

  #yum install openstack-quantum  
  #yum install openstack-quantum-linuxbridge

3. Change the following configuration of the ini file (/etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini)

  physical_interface_mappings = default:eth2 (where eth2 is typically the name of the Ethernet interface of the Mellanox adapter)

4. Configure the DHCP server according to the following guidelines:

http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html
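
As a minimal sketch, a Linux Bridge based network node typically uses the following drivers in /etc/quantum/dhcp_agent.ini (an assumption based on the standard Grizzly options; see the guide above for the full option list):

  [DEFAULT]
  interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
  dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq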

5. Start the DHCP agent and the Linux Bridge agent:

  #/etc/init.d/quantum-dhcp-agent start
  #/etc/init.d/quantum-linuxbridge-agent start

InfiniBand support

The Mellanox Quantum Plugin uses InfiniBand partitions (PKeys) to separate networks.

Configuring Partitions

There are two options to configure InfiniBand partitions

1. Manual configuration: All the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf)

2. Automatic configuration: (Future)

Add/Change the following in the partitions.conf file

  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;

For every network you want to configure in Quantum, you have to configure the PKey associated with the VLAN of that network (as defined in Quantum).

  net1=0x1, ipoib, sl=0, defmember=full : SELF;

For example, if we have this configuration in /etc/quantum/plugins/mlnx/mlnx_conf.ini:

  [MLNX]
  network_vlan_ranges = default:1:10

then the partitions.conf file will have the following configuration:

  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
  net1=0x1, ipoib, sl=0, defmember=full : SELF;
  net2=0x2, ipoib, sl=0, defmember=full : SELF;
  net3=0x3, ipoib, sl=0, defmember=full : SELF;
  net4=0x4, ipoib, sl=0, defmember=full : SELF;
  net5=0x5, ipoib, sl=0, defmember=full : SELF;
  net6=0x6, ipoib, sl=0, defmember=full : SELF;
  net7=0x7, ipoib, sl=0, defmember=full : SELF;
  net8=0x8, ipoib, sl=0, defmember=full : SELF;
  net9=0x9, ipoib, sl=0, defmember=full : SELF;
  net10=0xa, ipoib, sl=0, defmember=full : SELF;

Change the following in /etc/opensm/opensm.conf:

  part_enforce both


Restart OpenSM:

  #/etc/init.d/opensmd restart

Configuring Quantum Server and Compute Nodes

SR-IOV (passthrough mode)

On the controller, change the following in the file /etc/quantum/plugins/mlnx/mlnx_conf.ini

  tenant_network_type = ib
  vnic_type = hostdev
  network_vlan_ranges = default:1:10 
  (sql_connection and reconnect_interval  can be configured as described above)

The mapping between VLAN and PKey is as follows: PKey = 0x8000 + VLAN ID. For example, VLAN 2 maps to PKey 0x8002.
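
This can be verified with a quick shell sketch (here for VLAN 2):

  #printf '0x%x\n' $((0x8000 + 2))
  0x8002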

On the compute node, change the following in the file /etc/quantum/plugins/mlnx/mlnx_conf.ini:

   physical_interface_mapping = default:autoib

The tenant_network_type, vnic_type, and network_vlan_ranges parameters should be configured with the same values as on the controller.

autoib can be replaced by the name of the PF.


Change the file /etc/eswitchd/eswitchd.conf

  fabrics  = default:autoib (or default:ib0)

The eswitchd daemon should be started first, followed by the Quantum agent:

  /etc/init.d/eswitchd restart
  /etc/init.d/quantum-mlnx-agent restart

Verify that the Mellanox VIF driver is configured in /etc/nova/nova.conf:

  libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver

Restart nova

  #/etc/init.d/openstack-nova-compute restart

Para-Virtualized

Use the Mellanox Quantum Plugin on the controller and the Linux Bridge plugin on the compute node.

On the controller, configure the file /etc/quantum/plugins/mlnx/mlnx_conf.ini

  tenant_network_type = ib
  vnic_type = bridge

On the compute node, we use the Linux Bridge plugin.

The eIPoIB module should be up and configured. Please refer to the eIPoIB configuration section in the Mellanox OFED User Manual. Once the eIPoIB interface is available, we use it in the Linux Bridge agent configuration.

For example, assuming eth1 is the eIPoIB interface:

To check that the interface type is eIPoIB, run the command:

  #ethtool -i <interface> 

and verify that the driver is "eth_ipoib".
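
For instance, a one-line check and its expected output (a sketch; the exact ethtool output varies by version):

  #ethtool -i eth1 | grep driver
  driver: eth_ipoib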

Change the file /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini

  [linux_bridge] 
  physical_interface_mappings = default:eth1

Configure the Linux Bridge VIF Driver in /etc/nova/nova.conf

  libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver

Restart nova

  #/etc/init.d/openstack-nova-compute restart

After configuration, the Linux Bridge agent should be restarted.

  #/etc/init.d/quantum-linuxbridge-agent restart

Configuring Network Node

Usage Examples

References

1. http://www.mellanox.com/openstack/

2. Source repository

3. Mellanox OFED

4. Mellanox OpenStack Solution Reference Architecture

For more details, please send your questions to openstack@mellanox.com.
