Mellanox-Neutron-Newton-Ubuntu-InfiniBand

=Overview=

==Mellanox Neutron ML2 Driver==

The Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports the Mellanox embedded switch functionality that is part of the InfiniBand HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
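
For example, a DIRECT vNIC can be requested when the Neutron port is created and then handed to Nova at boot time. The network, flavor, image, and instance names below are illustrative only:
 * 1) neutron port-create private_net --name sriov_port1 --binding:vnic_type direct
 * 2) nova boot --flavor m1.small --image <image> --nic port-id=<UUID of sriov_port1> vm1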

The driver supports the VLAN network type to provision virtual networks on InfiniBand fabrics.
 * The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
 * The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

==Prerequisites==

 * A running OpenStack environment installed with the ML2 plugin on top of OpenVswitch or LinuxBridge.
 * All nodes equipped with a Mellanox ConnectX®-3/ConnectX®-3Pro network adapter.
 * Mellanox OFED 2.4 or greater installed on all nodes.
 * SR-IOV enabled on all compute nodes.
 * iproute2 installed on all compute nodes.
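
These prerequisites can be sanity-checked on each node. The commands below are a minimal sketch (ofed_info ships with Mellanox OFED; adapters and any SR-IOV Virtual Functions show up in lspci):
 * 1) ofed_info -s
 * 2) lspci | grep -i mellanox
 * 3) ip -V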

=InfiniBand Network=

The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.

==OpenSM Provisioning with SDN Mechanism Driver==

The SDN Mechanism Driver allows OpenSM to dynamically assign PKeys in the InfiniBand network.

More details about applying the SDN Mechanism Driver with NEO can be found in the Mellanox NEO documentation.

==Manual OpenSM Configuration==

All PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf). (Automatic configuration is planned for a future phase.)

For ConnectX®-3/ConnectX®-3Pro, use the following configuration.

Add/change the following in the partitions.conf file:

management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
vlan10=0xa, ipoib, sl=0, defmember=full : ALL;

Change the following in /etc/opensm/opensm.conf:

allow_both_pkeys TRUE

For ConnectX®-4, use the following configuration.

Add/change the following in the partitions.conf.user_ext file:

vlan1=0x1, ipoib, sl=0, defmember=full: ALL_CAS;
vlan2=0x2, ipoib, sl=0, defmember=full: ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full: ALL_CAS;
vlan4=0x4, ipoib, sl=0, defmember=full: SELF;
vlan5=0x5, ipoib, sl=0, defmember=full: SELF;
vlan6=0x6, ipoib, sl=0, defmember=full: SELF;
vlan7=0x7, ipoib, sl=0, defmember=full: SELF;
vlan8=0x8, ipoib, sl=0, defmember=full: SELF;
vlan9=0x9, ipoib, sl=0, defmember=full: SELF;
vlan10=0xa, ipoib, sl=0, defmember=full: SELF;

Change the following in /etc/opensm/opensm.conf:

virt_enabled 2
no_partition_enforcement TRUE
part_enforce off
allow_both_pkeys FALSE
Notes:
 * Storage and management VLANs should also be defined in this file, in the same way as the VLANs above.
 * Define OpenSM as a member for all OpenStack VLANs. Otherwise, guests will show the link as down in "ibdev2netdev" and will have no connectivity.

Restart OpenSM:
 * 1) service opensmd restart
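
After the restart, the PKeys pushed by OpenSM can be checked on any node via sysfs. This is a minimal sketch and assumes the HCA is exposed as mlx4_0, port 1:
 * 1) grep -H . /sys/class/infiniband/mlx4_0/ports/1/pkeys/*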

=Controller Node=
To configure the Controller node:

 * Configure the Ubuntu Cloud Archive Newton repository:
 * 1) sudo add-apt-repository cloud-archive:newton
 * 2) sudo apt-get update


 * Install prerequisites
 * 1) apt-get install -y python-ethtool python-zmq

 * Install the Mellanox packages:
 * 1) apt-get install -y networking-mlnx-eswitchd neutron-mlnx-agent python-networking-mlnx


 * Run:
 * 1) systemctl enable neutron-mlnx-agent.service
 * 2) systemctl enable networking-mlnx-eswitchd.service
 * 3) systemctl daemon-reload

 * Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

 * Make sure /etc/neutron/plugin.ini is pointing to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link).
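
If the link does not exist yet, it can be created as follows (a minimal sketch using the paths referenced above):
 * 1) ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini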

 * Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:

[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan

 * For an OVS configuration:

mechanism_drivers = mlnx_infiniband, openvswitch

 * For a LinuxBridge configuration:

mechanism_drivers = mlnx_infiniband, linuxbridge

 * In both cases, also add the VLAN ranges:

[ml2_type_vlan]
network_vlan_ranges = default:1:10
 * Start (or restart) the Neutron server:
 * 1) service neutron-server restart
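
A quick way to confirm that the server picked up the Mellanox mechanism driver is to check the Neutron server log for the loaded driver names (the log path below is the Ubuntu default and is an assumption):
 * 1) grep -i mlnx_infiniband /var/log/neutron/neutron-server.log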

=Network Node=
To configure the Network node:

Prerequisite: the E_IPoIB port is configured and up.

 * Make sure that the eIPoIB module is loaded and configured in /etc/infiniband/openib.conf (for more information, please refer to the eIPoIB configuration in the Mellanox OFED User Manual):

E_IPOIB_LOAD=yes

 * Restart openibd:
 * 1) service openibd restart

 * Modify the network bridge configuration according to whether OpenVswitch or LinuxBridge is used.
 * For OpenVswitch, in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[ovs]
bridge_mappings = default:br-<eIPoIB interface>

 * For LinuxBridge, in /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:

[linux_bridge]
physical_interface_mappings = default:<eIPoIB interface>

NOTE: To obtain the eIPoIB interface name, run ethtool on the interface and check that the driver name is eth_ipoib:
 * 1) ethtool -i <interface>
driver: eth_ipoib
.....
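
A small sketch for locating the eIPoIB interface automatically (it simply walks the standard sysfs network device list):
 * 1) for i in /sys/class/net/*; do ethtool -i ${i##*/} 2>/dev/null | grep -q eth_ipoib && echo ${i##*/}; done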


 * Restart the network bridge agent and neutron-dhcp-agent:
 * 1) service neutron-dhcp-agent restart
 * For OpenVswitch:
 * 1) service neutron-openvswitch-agent restart
 * For LinuxBridge:
 * 1) service neutron-linuxbridge-agent restart

NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

=DHCP Server (usually part of the Network node)=

 * Modify /etc/neutron/dhcp_agent.ini as follows, choosing the interface driver according to OVS or LinuxBridge:

dhcp_driver = mlnx_dhcp.MlnxDnsmasq
dhcp_broadcast_reply = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

or

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver


 * Start DHCP server:
 * 1) service neutron-dhcp-agent restart

=Compute Nodes=
To configure the Compute Node:


 * Configure the Ubuntu Cloud Archive Newton repository
 * 1) sudo add-apt-repository cloud-archive:newton
 * 2) sudo apt-get update
 * Install prerequisites
 * 1) apt-get install -y python-ethtool python-zmq


 * Install the Mellanox packages:
 * 1) apt-get install -y networking-mlnx-eswitchd neutron-mlnx-agent python-networking-mlnx

 * Edit the file /usr/lib/systemd/system/neutron-mlnx-agent.service and change the reference to /etc/neutron/plugins/mlnx/mlnx.ini so that it points to /etc/neutron/plugins/mlnx/mlnx_conf.ini instead.
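
The resulting ExecStart line would look roughly as follows (a sketch only; the exact options in the packaged unit file may differ):

ExecStart=/usr/bin/neutron-mlnx-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/mlnx/mlnx_conf.ini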


 * Run:
 * 1) systemctl enable networking-mlnx-eswitchd.service
 * 2) systemctl enable neutron-mlnx-agent.service


 * Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:
 * 1) options mlx4_ib sm_guid_assign=0


 * Restart Nova:
 * 1) service nova-compute restart


 * Restart the driver:
 * 1) service opensmd restart
 * 2) /etc/init.d/openibd restart
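
After the restart, the module option can be verified via sysfs (a quick check that assumes the in-box mlx4_ib module is loaded; the expected value is 0):
 * 1) cat /sys/module/mlx4_ib/parameters/sm_guid_assign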

 * In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, the parameters tenant_network_type and network_vlan_ranges should be configured as on the controllers, and the physical interface mapping should be set:

physical_interface_mappings = default:<ib_interface> (for example, default:ib0)

 * Modify the file /etc/eswitchd/eswitchd.conf as follows:

fabrics = default:<ib_interface> (for example, default:ib0)


 * Enable and restart the services:
 * 1) systemctl enable networking-mlnx-eswitchd.service
 * 2) systemctl enable neutron-mlnx-agent.service
 * 3) systemctl restart networking-mlnx-eswitchd.service
 * 4) systemctl restart neutron-mlnx-agent.service
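
To verify that the Mellanox agent has registered with the Neutron server, list the agents from the controller node; the Mellanox agent should appear with an alive (:-)) status:
 * 1) neutron agent-list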

=Known Issues and Troubleshooting=

For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting.