
Mellanox-Neutron-Havana



Overview

Mellanox Neutron Plugin

The OpenStack Mellanox Neutron plugin supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA. The plugin provides a hardware vNIC (based on an SR-IOV virtual function) for each virtual machine vNIC, with its own connectivity, security, and QoS attributes. Hardware vNICs can be mapped to guest VMs through para-virtualization (using a tap device) or directly as a virtual PCI device, allowing higher performance and advanced features such as RDMA (remote direct memory access).

Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments. Future versions of the plugin will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.

This plugin is implemented according to the Plugin/Agent pattern.


   	         +-----------------+                       +--------------+
                 | Controller node |                       | Compute node |
       +-----------------------------------+     +-----------------------------------+
       |  +-----------+      +----------+  |     |  +----------+       +----------+  |
       |  |           |      |          |  |     |  |          |  zmq  |          |  |
       |  | Openstack | v2.0 | Mellanox |  | RPC |  | Mellanox |REQ/REP| Mellanox |  |
       |  | Neutron   +------+ Neutron  +-----------+ Neutron  +-------+ Embedded |  |
       |  |           |      | Plugin   |  |     |  | Agent    |       | Switch   |  |
       |  |           |      |          |  |     |  |          |       | (NIC)    |  |
       |  +-----------+      +----------+  |     |  +----------+       +----------+  |
       +-----------------------------------+     +-----------------------------------+
  • The OpenStack Mellanox Neutron plugin implements the Neutron v2.0 API.
  • The Mellanox Neutron plugin processes the Neutron API calls and manages network segmentation ID allocation.
  • The plugin uses a database to store configuration and allocation mappings.
  • The plugin maintains compatibility with the Linux Bridge plugin and supports the DHCP and L3 agents by running the L2 Linux Bridge agent on the network node.
  • The Mellanox OpenStack Neutron agent (L2 agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Mellanox Nova VIF Driver

The Mellanox Nova VIF driver should be used when running the Mellanox Neutron plugin. This driver supports VIF plugging by binding the vNIC (para-virtualized or SR-IOV, with optional RDMA guest access) to the embedded switch port. The VIF driver for para-virtualized mode is included in Nova. For SR-IOV pass-through, use the VIF driver from the Mellanox git repository or RPM.

Prerequisites

Installation with Red Hat Enterprise Linux OpenStack Platform

Make sure you follow the Red Hat prerequisites required for Red Hat Enterprise Linux OpenStack Platform. Please refer to https://access.redhat.com/products/Cloud/OpenStack/ for additional information regarding this product. Other Red Hat references:

1. RHOS reference document

2. RDO QuickStart

Neutron server

No special prerequisites needed.

Compute Nodes

1. python-pip (use "yum install python-pip").

2. Compute nodes should be equipped with a Mellanox ConnectX®-3 network adapter.

3. Mellanox OFED 2.0.3 or greater must be installed. Refer to the Mellanox website for the latest OFED release.

4. SR-IOV must be enabled on the ConnectX-3 card. Refer to the Mellanox Community; see the example after this list.

5. The software package iproute2 must be installed (required only on compute nodes).

6. The software package ethtool, version 3.8 or higher, must be installed (required only on compute nodes).

7. oslo.config (use "pip-python install oslo.config"). Required only on compute nodes.
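
As an illustration of item 4 only (the exact values depend on your adapter, firmware and environment; follow the Mellanox Community instructions), enabling SR-IOV on a ConnectX-3 card typically involves setting the number of virtual functions for the mlx4_core driver and reloading it. The num_vfs and port_type_array values below are examples (port_type_array=2,2 configures both ports as Ethernet):

  # cat /etc/modprobe.d/mlx4_core.conf
  options mlx4_core num_vfs=16 port_type_array=2,2 probe_vf=0
  # /etc/init.d/openibd restart
  # lspci | grep Mellanox    (the virtual functions should now be listed)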


Network Node

1. The network node should be equipped with a Mellanox ConnectX®-3 network adapter.

2. Mellanox OFED 2.0.3 or greater must be installed. Refer to the Mellanox website for the latest OFED release.

Ethernet Network

Controller Node

In /etc/nova/nova.conf ensure you have the following line:

  security_group_api=nova

If you changed it, restart the Nova services (e.g. cd /etc/init.d && for i in $( ls openstack-nova-* ); do service $i restart; done).

Neutron Server Node

Installation

Note: In case you are using Red Hat Enterprise Linux OpenStack Platform (RHOS), RHEL 6.4 is the minimum requirement.

1. The Neutron server uses a MySQL database. Make sure you have a running MySQL database for Neutron.

If the Neutron server is already running, stop it:

  #/etc/init.d/neutron-server stop

If you want to reuse an existing database, drop it and create it again. For example:

  mysql -u root -ppassword <<EOF
  drop database if exists neutron;
  create database neutron;
  EOF

If you want to create a new database and grant the required privileges:

  mysql -u root -ppassword <<EOF
  drop database if exists neutron;
  create database neutron;
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron' IDENTIFIED BY 'password';
  FLUSH PRIVILEGES;
  EOF
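
To quickly verify the database and grants (the 'password' shown here is only an example; use your own credentials):

  #mysql -u neutron -ppassword -e "show databases;" | grep neutron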


2. Install the required RPM for the Mellanox Neutron plugin:

  #yum install openstack-neutron-mellanox

3. Modify the /etc/neutron/neutron.conf file.

  core_plugin = neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin

4. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment. The configuration options are described in the Configuration section below.

5. Change the soft link to the plugin configuration (plugin.ini)

  #unlink /etc/neutron/plugin.ini 
  #ln -s /etc/neutron/plugins/mlnx/mlnx_conf.ini /etc/neutron/plugin.ini

Configuration

1. Make the Mellanox plugin the current Neutron plugin by editing neutron.conf and changing core_plugin.

  core_plugin = neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin

2. Plugin configuration: Edit the configuration file: /etc/neutron/plugins/mlnx/mlnx_conf.ini

  [DATABASE]
  sql_connection - must be changed to the connection string of the Neutron DB used by the plugin.
                   It must match the MySQL configuration. For example:
                   "mysql://neutron:password@127.0.0.1:3306/neutron"
  reconnect_interval = 2 (default)


  [MLNX]
  tenant_network_type - must be set to one of the supported tenant network types.
                        Possible values: 'vlan' for Ethernet or 'ib' for InfiniBand. 'vlan' is the default.
  network_vlan_ranges - must be configured to specify the names of the physical networks
                        managed by the Mellanox plugin, along with the ranges of VLAN IDs
                        available on each physical network for allocation to virtual networks.
                        The possible VLAN range is 1-4093.
                        The default is "default:1:100".
                        Format: <fabric name>:<vlan range start>:<vlan range end>

  [AGENT]
  # Agent's polling interval in seconds
  # polling_interval = 2
  # (BoolOpt) Enable server RPC compatibility with old (pre-Havana) agents.
  rpc_support_old_agents = True

Note: rpc_support_old_agents should be set to 'True' (non default).
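
Putting the options above together, a minimal /etc/neutron/plugins/mlnx/mlnx_conf.ini for an Ethernet setup could look as follows (the connection string and VLAN range are illustrative; adjust them to your environment):

  [DATABASE]
  sql_connection = mysql://neutron:password@127.0.0.1:3306/neutron

  [MLNX]
  tenant_network_type = vlan
  network_vlan_ranges = default:1:100

  [AGENT]
  rpc_support_old_agents = True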

Start Services

Start (or restart) the Neutron server

  #/etc/init.d/neutron-server start

Compute Nodes

Installation

1. If you have not yet downloaded the Mellanox OpenStack repo file, download it:

#wget http://www.mellanox.com/downloads/solutions/openstack/havana/repo/mlnx-havana/mlnx-havana.repo -O /etc/yum.repos.d/mlnx-havana.repo

2. Install the eswitchd RPM:

#yum install eswitchd

3. If you would like to use Ethernet in para-virtualized mode only, the VIF driver is already included in the Nova package. Otherwise, install the Mellanox VIF driver (make sure Nova is installed on your server):

#yum install mlnxvif

4. Install the required RPM for the Neutron agent:

#yum install openstack-neutron-mellanox

Configuration

1. Configure /etc/eswitchd/eswitchd.conf if needed (see the example below).

Please refer to the Mellanox Community for the eSwitchd installation notes.
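
As a minimal illustration (the fabric name and interface are example values; the Mellanox Community eSwitchd notes are the authoritative reference), the fabrics mapping in /etc/eswitchd/eswitchd.conf typically looks like:

  fabrics = default:autoeth    (or, for example, default:eth2 to name the interface explicitly)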

2. Modify /etc/nova/nova.conf

   compute_driver=nova.virt.libvirt.driver.LibvirtDriver
   libvirt_vif_driver=mlnxvif.vif.MlxEthVIFDriver
   security_group_api=nova
   connection_type=libvirt

If you did not install the Mellanox VIF driver and you plan to use Ethernet in para-virtualized mode only, change the following:

   libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

3. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment.

  [AGENT]
  polling_interval - polling interval (in seconds) for existing vNICs. The default is 2 seconds.
  rpc - must be set to 'True'.

  [ESWITCH]
  physical_interface_mapping - maps each physical network name to the physical interface (on top of the
                               Mellanox adapter) connecting the node to that physical network. The format is
                               <fabric name>:<PF name> (only relevant on compute nodes). The PF name can be
                               the PF (Physical Function) name, 'autoeth' for automatic Ethernet
                               configuration, or 'autoib' for automatic InfiniBand configuration.
                               The default is "default:autoeth".
  vnic_type - type of VM network interface: 'mlnx_direct' or 'hostdev', according to libvirt terminology.
              hostdev: the traditional method of assigning any generic PCI device to a guest (SR-IOV).
              mlnx_direct: provides a macvtap device on top of the PCI device (default).
              bridge: used when running the Linux Bridge plugin on top of an eIPoIB device.
  daemon_endpoint - eswitch daemon endpoint connection (URL). The default is 'tcp://127.0.0.1:5001'.
  request_timeout - the number of milliseconds the agent waits for a response to a request to the daemon
                    (default: 3000 msec).

Note: daemon_endpoint should be changed to tcp://127.0.0.1:60001

For a plugin configuration file example (Havana), please refer to the Mellanox config .ini file.
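
For illustration, the compute-node settings described above could be combined as follows (the interface mapping, vnic_type and endpoint are example values; adjust them to your environment):

  [AGENT]
  polling_interval = 2
  rpc = True

  [ESWITCH]
  physical_interface_mapping = default:autoeth
  vnic_type = mlnx_direct
  daemon_endpoint = tcp://127.0.0.1:60001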

4. Enable the DHCP server to allow VMs to acquire IPs.

  neutron_use_dhcp=true

Start Services

1. Restart Nova.

  #/etc/init.d/openstack-nova-compute restart

2. Start eswitch Daemon

   #/etc/init.d/eswitchd start

3. Start the Neutron agent

  #/etc/init.d/neutron-mlnx-agent start

Note: the eswitch daemon should be running before the Neutron agent is started.

Network Node

A network node equipped with a Mellanox ConnectX-3 adapter card should be configured as follows:

1. Install the Neutron Linux Bridge plugin and the Neutron DHCP and L3 agents:

  #yum install openstack-neutron-linuxbridge

2. Change the following configuration of the ini file (/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini)

  physical_interface_mappings = default:eth2 (where eth2 is typically the Ethernet interface name of the Mellanox adapter)

3. Configure the DHCP server according to the following guidelines:

http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html

4. Start the DHCP server

  #/etc/init.d/neutron-linuxbridge-agent start
  #/etc/init.d/neutron-dhcp-agent start

InfiniBand Network

The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.

SM Configuration

All PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf). (Automatic configuration is planned for a future phase.)

Add/change the following in the partitions.conf file:

  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;

For every network you want to configure in Neutron, you have to configure the PKey associated with the VLAN of that network (as defined in Neutron):

  vlan1=0x1, ipoib, sl=0, defmember=full : SELF;

For example, if we have 10 VLANs defined in the configuration in /etc/neutron/plugins/mlnx/mlnx_conf.ini:

  [MLNX]
  network_vlan_ranges = default:1:10

then the partitions.conf file will be configured as follows:

  management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
  vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
  vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
  vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
  vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
  vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
  vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
  vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
  vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
  vlan10=0xa, ipoib, sl=0, defmember=full : ALL;

Change the following in /etc/opensm/opensm.conf:

  allow_both_pkeys TRUE


Restart OpenSM:

  #/etc/init.d/opensmd restart

Neutron Server Node

SR-IOV

SR-IOV is a pass-through mode. Change the following in the file /etc/neutron/plugins/mlnx/mlnx_conf.ini:

  tenant_network_type = ib
  vnic_type = hostdev
  network_vlan_ranges = default:1:10 
  (sql_connection and reconnect_interval can be configured as described above)

The mapping between VLAN and PKey is as follows: VLAN X = PKey 0x8000 + X. For example, VLAN 2 maps to PKey 0x8002.

Para-Virtualized

On the controller, configure the file /etc/neutron/plugins/mlnx/mlnx_conf.ini

  tenant_network_type = ib
  vnic_type = bridge

Compute Nodes

SR-IOV

SR-IOV is a pass-through mode. Perform the following changes in the file /etc/neutron/plugins/mlnx/mlnx_conf.ini:

   physical_interface_mapping = default:autoib

The tenant_network_type, vnic_type and network_vlan_ranges parameters should be configured as on the controller.

autoib can be replaced by the name of the PF (see the combined example below).
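
For illustration, combining the settings above, a compute-node /etc/neutron/plugins/mlnx/mlnx_conf.ini for InfiniBand SR-IOV could contain (the values mirror the controller example and are illustrative):

  [MLNX]
  tenant_network_type = ib
  vnic_type = hostdev
  network_vlan_ranges = default:1:10

  [ESWITCH]
  physical_interface_mapping = default:autoib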

Change the file /etc/eswitchd/eswitchd.conf

  fabrics  = default:autoib (or default:ib0)

The eswitch daemon should be started first, and then the Neutron agent:

  /etc/init.d/eswitchd restart
  /etc/init.d/neutron-mlnx-agent restart

Verify that the Mellanox VIF driver is configured in /etc/nova/nova.conf:

  libvirt_vif_driver=mlnxvif.vif.MlxEthVIFDriver

Restart nova

  #/etc/init.d/openstack-nova-compute restart


Para-Virtualized

Here we use the Linux Bridge plugin.

The eIPoIB module should be up and configured. In /etc/infiniband/openib.conf:

  E_IPOIB_LOAD=yes

And restart openibd:

#/etc/init.d/openibd restart

Please refer to the eIPoIB configuration section in the Mellanox OFED User Manual.

Once we have the eIPoIB interface, we use it in the Linux bridge agent configuration. For example, assume eth1 is the eIPoIB interface.

To check that the interface type is eIPoIB, run the following command and verify that the driver is "eth_ipoib":

  #ethtool -i <interface> 
  driver: eth_ipoib
  version: 1.0.0
  firmware-version: 1 
  bus-info: ib0
  supports-statistics: yes
  supports-test: no
  supports-eeprom-access: no
  supports-register-dump: no
  supports-priv-flags: no

Change the file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini

  [linux_bridge] 
  physical_interface_mappings =  default:eth1   

Configure the Linux Bridge VIF Driver in /etc/nova/nova.conf

  libvirt_vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver

Restart nova

  #/etc/init.d/openstack-nova-compute restart

After the configuration, the Linux bridge agent should be restarted.

  #/etc/init.d/neutron-linuxbridge-agent restart

Network Node

Configure the Linux bridge plugin as described in the InfiniBand Para-Virtualized section above. To make use of DHCP in para-virtualized mode (with Linux Bridge), make sure the 'bootp-broadcast-always' option is configured in the instance's dhclient.conf file (see the example below).
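
For example, assuming the guest uses ISC dhclient (the configuration file is commonly /etc/dhcp/dhclient.conf, but the path may vary by guest distribution), add the following line inside the instance:

  bootp-broadcast-always;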

DHCP Server

For DHCP support, the network node should use the Mellanox dnsmasq driver as the DHCP driver.

   # yum install mlnx-dnsmasq

Change the following in /etc/neutron/dhcp_agent.ini

  dhcp_driver = mlnx_dhcp.MlnxDnsmasq

Download the Mellanox ipoibd script and overwrite /sbin/ipoibd:

  #wget http://www.mellanox.com/downloads/solutions/openstack/havana/ipoibd
  #cp ipoibd /sbin/ipoibd

Download and apply a patch for Linux Bridge Agent:

  #wget http://www.mellanox.com/downloads/solutions/openstack/havana/patches/linux_bridge_agent.py.patch
  #cd /usr/lib/python2.6/site-packages/neutron/plugins/linuxbridge/agent
  #patch -p1 < linux_bridge_agent.py.patch 

Restart openibd

  #/etc/init.d/openibd restart

Start linux bridge agent

  /etc/init.d/neutron-linuxbridge-agent restart

Start dhcp server

  /etc/init.d/neutron-dhcp-agent restart

Usage Examples

Known issues and Troubleshooting

For known issues and troubleshooting options refer to Mellanox OpenStack Troubleshooting.

References

1. http://www.mellanox.com/openstack/

2. Source repository

3. Mellanox OFED

4. Mellanox OpenStack Solution Reference Architecture

5. Mellanox OpenStack Troubleshooting

For more details, please send your questions to openstack@mellanox.com.

Return to Mellanox-OpenStack wiki page.