Mellanox-Neutron-Ocata-Redhat-InfiniBand

=Overview=

Mellanox Neutron ML2 Driver
Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports Mellanox embedded switch functionality as part of the InfiniBand HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
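For example, a DIRECT vNIC can be requested by creating a port with vnic_type direct and booting a VM with it (a minimal sketch using the Ocata-era CLI; the network "net1", image, and flavor names are placeholders):
 * 1) neutron port-create net1 --binding:vnic_type direct
 * 2) nova boot vm1 --flavor m1.small --image centos7 --nic port-id=<port-id>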

The driver supports the VLAN network type to facilitate virtual networks on InfiniBand fabrics.
 * The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
 * The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Prerequisites
 * Red Hat Enterprise Linux 7.1 or later.
 * A running OpenStack environment installed with the ML2 plugin on top of Open vSwitch or Linux Bridge (RDO Manager or Packstack).
 * All nodes equipped with a Mellanox ConnectX®-3 / ConnectX®-4 network adapter.
 * The latest Mellanox OFED 3.x installed on all nodes.
 * SR-IOV enabled on all compute nodes.
 * The software package iproute2 installed on all compute nodes.
 * The following repositories added on each node:
 * 1) yum -y install yum-plugin-priorities
 * 2) cd /etc/yum.repos.d/
 * 3) sudo wget https://trunk.rdoproject.org/centos7-ocata/current/delorean.repo
 * 4) yum update -y
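As a quick sanity check that SR-IOV is active on a compute node, the Virtual Functions should be visible in the PCI device list (a sketch; the exact device description varies by adapter):
 * 1) lspci | grep -i "virtual function"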

=InfiniBand Network=

The Mellanox Neutron Plugin uses InfiniBand partitions (PKeys) to separate networks.

OpenSM Provisioning with SDN Mechanism Driver
The SDN Mechanism Driver allows OpenSM to dynamically assign PKeys in the InfiniBand network.

More details about applying the SDN Mechanism Driver with NEO can be found here.

Manual OpenSM Configuration
All PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf). (Automatic configuration is planned for a future phase.)

For ConnectX®-3/ConnectX®-3 Pro, use the following configuration.
Add/change the following in the partitions.conf file:

management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
vlan10=0xa, ipoib, sl=0, defmember=full : ALL;

Change the following in /etc/opensm/opensm.conf:
allow_both_pkeys TRUE

For ConnectX®-4, use the following configuration.
Add/change the following in the partitions.conf.user_ext file:

vlan1=0x1, ipoib, sl=0, defmember=full: ALL_CAS;
vlan2=0x2, ipoib, sl=0, defmember=full: ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full: ALL_CAS;
vlan4=0x4, ipoib, sl=0, defmember=full: SELF;
vlan5=0x5, ipoib, sl=0, defmember=full: SELF;
vlan6=0x6, ipoib, sl=0, defmember=full: SELF;
vlan7=0x7, ipoib, sl=0, defmember=full: SELF;
vlan8=0x8, ipoib, sl=0, defmember=full: SELF;
vlan9=0x9, ipoib, sl=0, defmember=full: SELF;
vlan10=0xa, ipoib, sl=0, defmember=full: SELF;

Change the following in /etc/opensm/opensm.conf:
virt_enabled 2
no_partition_enforcement TRUE
part_enforce FALSE
allow_both_pkeys FALSE
 * Note: Storage and management VLANs should be defined as shown above.
 * Note: Define OpenSM as a member of all OpenStack VLANs; otherwise, the guest will show link down in "ibdev2netdev" and have no connectivity.

Restart OpenSM:
 * 1) systemctl restart opensmd.service
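To verify that the partitions were applied, the PKey table can be read through sysfs (a sketch; the device name mlx4_0 and port 1 are examples):
 * 1) cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/*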

Controller Node
To configure the Controller node:

1. Install Mellanox RPMs:
 * 1) yum install -y --nogpgcheck python-networking-mlnx

Neutron Server
1. Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

2. Make sure /etc/neutron/plugin.ini points to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link).
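If the symbolic link does not exist, it can be created as follows:
 * 1) ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini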

3. Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx_infiniband, openvswitch
[ml2_type_vlan]
network_vlan_ranges = default:1:10
 * If using Linux Bridge, set instead: mechanism_drivers = mlnx_infiniband, linuxbridge

4. Start (or restart) the Neutron server:
 * 1) systemctl restart neutron-server.service
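As a quick check that the mechanism drivers were loaded (log path assumes RDO defaults):
 * 1) grep -i "mechanism" /var/log/neutron/server.log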

Nova Scheduler
To enable the PciPassthroughFilter, modify /etc/nova/nova.conf:
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, PciPassthroughFilter
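After modifying /etc/nova/nova.conf, restart the scheduler so the filter change takes effect (service name assumes RDO packaging):
 * 1) systemctl restart openstack-nova-scheduler.service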

Network Node
To configure the Network node:

Prerequisites: an eIPoIB port is configured and up.

1. Make sure the eIPoIB module is up and configured in /etc/infiniband/openib.conf:
E_IPOIB_LOAD=yes
For more information, please refer to the eIPoIB configuration in the Mellanox OFED User Manual.

2. Restart openibd:
 * 1) service openibd restart

3. Modify the network bridge configuration according to whether OpenVswitch or LinuxBridge is used:
 * 3.1 OpenVswitch, in /etc/neutron/plugins/ml2/openvswitch_agent.ini:
[ovs]
bridge_mappings = default:br-
 * 3.2 LinuxBridge, in /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = default:

NOTE: In order to obtain the eIPoIB interface name, run the ethtool command (see below) and check that the driver name is eth_ipoib (driver: eth_ipoib .....):
 * 1) ethtool -i 
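If the eIPoIB interface name is not known, a short shell loop can locate it (a sketch; iterates over all interfaces and requires ethtool):
 * 1) for i in $(ls /sys/class/net); do ethtool -i $i 2>/dev/null | grep -q eth_ipoib && echo $i; done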

4. Restart the network bridge agent:
 * 4.1 OpenVswitch:
 * 1) systemctl restart neutron-openvswitch-agent.service
 * 4.2 LinuxBridge (if using Linux Bridge):
 * 1) service neutron-linuxbridge-agent restart

NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

DHCP Server (Usually part of the Network node)
1. Modify /etc/neutron/dhcp_agent.ini as follows, according to OVS or LinuxBridge:
dhcp_driver = networking_mlnx.dhcp.mlnx_dhcp.MlnxDnsmasq
dhcp_broadcast_reply = True

 * 1.1 For OVS:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 * 1.2 For Linux Bridge:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

2. Restart DHCP server:
 * 1) systemctl restart neutron-dhcp-agent.service

Compute Nodes
To configure the Compute Node:

1. Install Mellanox RPMs:
 * 1) yum install --nogpgcheck -y python-networking-mlnx

2. Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:
options mlx4_ib sm_guid_assign=0
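After the driver restarts, the parameter value can be confirmed through sysfs (a sketch; the path assumes the mlx4_ib module is loaded):
 * 1) cat /sys/module/mlx4_ib/parameters/sm_guid_assign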

3. Restart the driver:
 * 1) /etc/init.d/openibd restart

Nova Compute
Nova-compute needs to know which PCI devices are allowed to be passed through to the VMs. For SR-IOV PCI devices, it also needs to know to which physical network the VF belongs. This is done through the passthrough_whitelist parameter under the [pci] section in /etc/nova/nova.conf. For example, to whitelist and tag the VFs by their PCI address, use the following setting:
[pci]
passthrough_whitelist = {"address":"*:0a:00.*","physical_network":"default"}
This associates any VF whose PCI address includes ':0a:00.' with the physical network "default".

1. Add the passthrough_whitelist setting to /etc/nova/nova.conf as shown above.
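To find the VF PCI addresses to whitelist, the PCI device list can be inspected (a sketch; the device description varies by adapter):
 * 1) lspci -D | grep -i "virtual function"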

2. Restart Nova:
 * 1) systemctl restart openstack-nova-compute

Neutron MLNX Agent
1. Run:
 * 1) systemctl enable neutron-mlnx-agent.service
 * 2) systemctl start neutron-mlnx-agent.service

2. Run:
 * 1) systemctl daemon-reload

3. In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, the parameters tenant_network_type and network_vlan_ranges should be configured the same as on the controller, and physical_interface_mappings should be set:
physical_interface_mappings = default:<ib_interface> (for example, default:ib0)
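A minimal sketch of the mapping entry, assuming the [eswitch] section used by networking-mlnx and ib0 as the InfiniBand interface:
[eswitch]
physical_interface_mappings = default:ib0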

4. Modify the file /etc/neutron/plugins/ml2/eswitchd.conf as follows:
fabrics = default:<ib_interface> (for example, default:ib0)

5. Start eSwitchd:
 * 1) systemctl enable eswitchd.service
 * 2) systemctl start eswitchd.service

6. Start the Neutron agent:
 * 1) systemctl restart neutron-mlnx-agent
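To confirm the agent registered with Neutron, it should appear in the agent list when run from the controller (Ocata-era CLI):
 * 1) neutron agent-list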

Known issues and Troubleshooting
For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting.

Issue: Missing zmq package on all nodes (Controller/Compute).
Solution:
 * 1) wget https://bootstrap.pypa.io/get-pip.py
 * 2) sudo python get-pip.py
 * 3) sudo pip install pyzmq
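To verify the installation, import the module and print its version:
 * 1) python -c "import zmq; print(zmq.__version__)"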