Mellanox-Neutron-Queens-Ubuntu-InfiniBand

Overview

Mellanox Neutron ML2 Driver

Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports Mellanox embedded switch functionality as part of the InfiniBand HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
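
For illustration, a DIRECT port can be created and attached to an instance with the standard OpenStack CLI roughly as follows; the network, image and flavor names below are placeholders, not values required by the driver:

# openstack port create --network <ib_network> --vnic-type direct direct_port1
# openstack server create --image <image> --flavor <flavor> --nic port-id=<direct_port1_id> vm1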

The driver supports the VLAN network type to facilitate virtual networks on InfiniBand fabrics.

  • The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
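
Once the agents are installed and started (see the Compute Nodes section below), their state can be checked from the controller; the exact agent name shown depends on the installed networking-mlnx version:

# openstack network agent list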

Prerequisites

  • A running OpenStack environment installed with the ML2 plugin on top of OpenVswitch or Linux Bridge.
  • Mellanox ConnectX®-3/ConnectX®-3 Pro adapters with Mellanox OFED 2.4 or greater.
  • Mellanox ConnectX®-4/ConnectX®-4 Lx/ConnectX®-5 adapters with MLNX_OFED_LINUX 3.1-1.5.7 or greater.
  • SR-IOV enabled on all compute nodes.
  • The software package iproute2 installed on all compute nodes.
  • Mellanox UFM 5.9.5 or greater.

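As a quick sanity check of the adapter and OFED installation on each node, commands such as the following can be used (ofed_info and ibdev2netdev are shipped with MLNX_OFED):

# ofed_info -s
# ibdev2netdev
# lspci | grep -i mellanox
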
InfiniBand Network

The Mellanox Neutron Plugin uses InfiniBand Partitions (PKeys) to separate networks.

SM Node

OpenSM Provisioning with SDN Mechanism Driver

The SDN Mechanism Driver allows OpenSM to dynamically assign PKeys in the InfiniBand network.

More details about applying the SDN Mechanism Driver with NEO can be found here

Manual OpenSM Configuration

All the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf). (Automatic configuration is planned for a future phase.)

For ConnectX®-3/ConnectX®-3Pro use the following configuration

Add/Change the following in the partitions.conf file

  management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
  vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
  vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
  vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
  vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
  vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
  vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
  vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
  vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
  vlan10=0xa, ipoib, sl=0, defmember=full : ALL;

Change the following in /etc/opensm/opensm.conf:

  allow_both_pkeys TRUE

For ConnectX®-4 use the following configuration

Add/Change the following in the partitions.conf.user_ext

vlan1=0x1, ipoib, sl=0, defmember=full: ALL_CAS;
# Storage and management VLANs should be defined as follows:
vlan2=0x2, ipoib, sl=0, defmember=full: ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full: ALL_CAS;
# Define OpenSM as a member of all OpenStack VLANs. Otherwise, guests will show link down in "ibdev2netdev" and have no connectivity.
vlan4=0x4, ipoib, sl=0, defmember=full: SELF;
vlan5=0x5, ipoib, sl=0, defmember=full: SELF;
vlan6=0x6, ipoib, sl=0, defmember=full: SELF;
vlan7=0x7, ipoib, sl=0, defmember=full: SELF;
vlan8=0x8, ipoib, sl=0, defmember=full: SELF;
vlan9=0x9, ipoib, sl=0, defmember=full: SELF;
vlan10=0xa, ipoib, sl=0, defmember=full: SELF;

Change the following in /etc/opensm/opensm.conf:

virt_enabled 2
no_partition_enforcement TRUE
part_enforce off
allow_both_pkeys FALSE


  • Restart the OpenSM:

# service opensmd restart
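
To confirm that the subnet manager came back up after the restart, sminfo (part of infiniband-diags) can be used; the log path below assumes the default OpenSM configuration:

# sminfo
# grep -i error /var/log/opensm.log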

Controller Node

To configure the Controller node:

  • Configure the OpenStack Queens repository (Ubuntu Cloud Archive):

sudo add-apt-repository cloud-archive:queens
sudo apt-get update

  • Install prerequisites
# apt-get install -y python-ethtool python-zmq
  • Install the Mellanox packages:
# apt-get install -y openstack-neutron-mellanox python-networking-mlnx


  • Enable the Mellanox services and reload systemd:
# systemctl enable neutron-mlnx-agent networking-mlnx-eswitchd.service
# systemctl daemon-reload
  • Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
  • Make sure /etc/neutron/plugin.ini is pointing to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link)
  • Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
# Choose the mechanism drivers according to the L2 agent in use:
mechanism_drivers = mlnx_infiniband, openvswitch
# or, for LinuxBridge:
# mechanism_drivers = mlnx_infiniband, linuxbridge
[ml2_type_vlan]
network_vlan_ranges = default:1:10

  • Start (or restart) the Neutron server:
# service neutron-server restart
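
As a quick check of the controller configuration, a VLAN network can be created on the "default" physical network; with the partitions.conf shown above, segmentation ID 3 corresponds to PKey 0x3 (the network and subnet names are examples):

# openstack network create --provider-network-type vlan --provider-physical-network default --provider-segment 3 ib_net1
# openstack subnet create --network ib_net1 --subnet-range 192.168.3.0/24 ib_subnet1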

Network Node

To configure the Network node:

Prerequisites:

E_IPoIB port is configured and up
  • Make sure that the eIPoIB module is loaded and configured in /etc/infiniband/openib.conf. For more information, please refer to the eIPoIB configuration section in the Mellanox OFED User Manual.
E_IPOIB_LOAD=yes
  • Restart openibd:
# service openibd restart
  • Modify the network bridge configuration according to whether OpenVswitch or LinuxBridge is used:
    • OpenVswitch: /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = default:br-<eIPoIB interface>
    • LinuxBridge: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge] 
physical_interface_mappings = default:<eIPoIB interface>

NOTE: In order to obtain the eIPoIB interface name, run the ethtool command and check that the driver name is eth_ipoib:

# ethtool -i <eIPoIB_interface> 
driver: eth_ipoib
.....
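
If it is not obvious which interface is the eIPoIB one, a small loop such as the following (a sketch, assuming the interfaces are listed under /sys/class/net) prints every interface whose driver is eth_ipoib:

# for i in $(ls /sys/class/net); do ethtool -i $i 2>/dev/null | grep -q eth_ipoib && echo $i; done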
  • Restart the network bridge agent and neutron-dhcp-agent:
# service neutron-dhcp-agent restart
    • OpenVswitch
# service neutron-openvswitch-agent restart
    • LinuxBridge
# service neutron-linuxbridge-agent restart

NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

DHCP Server (Usually part of the Network node)

  • Modify /etc/neutron/dhcp_agent.ini as follows, according to whether OVS or Linuxbridge is used:
dhcp_driver = networking_mlnx.dhcp.mlnx_dhcp.MlnxDnsmasq
dhcp_broadcast_reply = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# or, for LinuxBridge: interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  • Restart the DHCP server:
# service neutron-dhcp-agent restart
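
The DHCP agent state can then be verified from the controller (the --agent-type filter is optional):

# openstack network agent list --agent-type dhcp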


Compute Nodes

To configure the Compute Node:

  • Configure the Mellanox OpenStack Ocata repository:
sudo add-apt-repository http://www.mellanox.com/repository/solutions/openstack/ocata/ubuntu


  • Install the Mellanox packages:
# apt-get install -y openstack-neutron-mellanox eswitchd mlnx-python-networking-mlnx python-ethtool python-zmq 


  • Edit the file /usr/lib/systemd/system/neutron-mlnx-agent.service and change the config path /etc/neutron/plugins/mlnx/mlnx.ini to /etc/neutron/plugins/mlnx/mlnx_conf.ini

  • Reload systemd and start the Neutron MLNX agent:
# systemctl daemon-reload
# service neutron-mlnx-agent start
  • Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:
options mlx4_ib sm_guid_assign=0
  • Restart Nova:
# service nova-compute restart
  • Restart the driver:
# service opensmd restart
# /etc/init.d/openibd restart
  • In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, the parameters tenant_network_type and network_vlan_ranges should be configured to match the controller, and the physical interface mapping should be set:
physical_interface_mappings = default:<ib_interface> (for example, default:ib0)
  • Modify the file /etc/eswitchd/eswitchd.conf as follows:
fabrics = default:<ib_interface> (for example default:ib0)
  • Start eSwitchd:
# service eswitchd restart
  • Start the Neutron agent:
# service neutron-mlnx-agent restart
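
After the services are started, a short check on the compute node confirms that the mlx4_ib option took effect (ConnectX®-3 only, assuming the module is loaded) and that both daemons are running; paths and service names assume the default packaging:

# cat /sys/module/mlx4_ib/parameters/sm_guid_assign
# service eswitchd status
# service neutron-mlnx-agent status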

Known issues and Troubleshooting

For known issues and troubleshooting options refer to Mellanox OpenStack Troubleshooting