Mellanox-Neutron-Juno-Ubuntu-InfiniBand

Overview

Mellanox Neutron ML2 Driver

The Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as RDMA (Remote Direct Memory Access).

The driver supports the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.

  • The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Mellanox Neutron Plugin

For details regarding the Mellanox Neutron plugin, please refer to https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Ubuntu.

Prerequisites

  • The software package iproute2 (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all Compute nodes
  • The Mellanox OpenStack Juno repository added:

sudo add-apt-repository http://www.mellanox.com/repository/solutions/openstack/juno/ubuntu/
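
After adding the repository, refresh the package index so the Mellanox packages become installable (standard apt usage):

sudo apt-get update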

InfiniBand Network

The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.

SM Node

OpenSM Configuration – Without UFM

To configure OpenSM:

1. Make sure that all the PKeys are predefined in /etc/opensm/partitions.conf, for example:

management=0xffff,ipoib, sl=0, defmember=full: ALL, ALL_SWITCHES=full,SELF=full;

2. For every network you want to configure in Neutron, configure the PKey associated with that network's VLAN (as defined in Neutron), for example: vlan1=0x1, ipoib, sl=0, defmember=full : ALL;

Below is an example of the partitions.conf file for a case where 10 VLANs are defined in /etc/neutron/plugins/mlnx/mlnx_conf.ini (network_vlan_ranges = physnet1:1:10):

management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
vlan1=0x1, ipoib, sl=0, defmember=full: ALL_CAS;
vlan2=0x2, ipoib, sl=0, defmember=full: ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full: ALL_CAS;
vlan4=0x4, ipoib, sl=0, defmember=full: ALL_CAS;
vlan5=0x5, ipoib, sl=0, defmember=full: ALL_CAS;
vlan6=0x6, ipoib, sl=0, defmember=full: ALL_CAS;
vlan7=0x7, ipoib, sl=0, defmember=full: ALL_CAS;
vlan8=0x8, ipoib, sl=0, defmember=full: ALL_CAS;
vlan9=0x9, ipoib, sl=0, defmember=full: ALL_CAS;
vlan10=0xa, ipoib, sl=0, defmember=full: ALL_CAS;

3. Modify the following line in the file /etc/opensm/opensm.conf from FALSE to TRUE:

allow_both_pkeys TRUE
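
One way to apply this change in place is with sed (a sketch; adjust the pattern if your opensm.conf spells the option differently, e.g. with extra whitespace):

# sed -i 's/^allow_both_pkeys FALSE/allow_both_pkeys TRUE/' /etc/opensm/opensm.conf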

4. Restart OpenSM:

# /etc/init.d/opensmd restart
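
To verify that the partitions were applied, you can read the PKey table that OpenSM pushed to a host's HCA from sysfs (a sketch; the device name mlx4_0 and port number 1 are examples, adjust to your hardware):

# cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/*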

Controller Node

2. Install the Mellanox packages:

# apt-get install -y openstack-neutron-mellanox eswitchd mlnx-dnsmasq

3. Edit the following file: /usr/lib/systemd/system/neutron-mlnx-agent.service

In that file, change /etc/neutron/plugins/mlnx/mlnx.ini to /etc/neutron/plugins/mlnx/mlnx_conf.ini.
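
A one-line way to perform this substitution (a sketch; check the resulting file afterwards):

# sed -i 's|/etc/neutron/plugins/mlnx/mlnx.ini|/etc/neutron/plugins/mlnx/mlnx_conf.ini|g' /usr/lib/systemd/system/neutron-mlnx-agent.service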

4. Run:

# /etc/init.d/neutron-plugin-mlnx-agent start 

5. Run:

# systemctl daemon-reload

6. Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

7. Make sure /etc/neutron/plugin.ini is pointing to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link)
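
For example, to inspect the link and recreate it if it points elsewhere (a sketch):

# ls -l /etc/neutron/plugin.ini
# ln -sf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini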

8. Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:

[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx, openvswitch
# or mechanism_drivers = mlnx, linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:1:10
[eswitch]
# (StrOpt) Type of Network Interface to allocate for VM:
# mlnx_direct or hostdev according to libvirt terminology
vnic_type = hostdev

9. Start (or restart) the Neutron server:

# /etc/init.d/neutron-server restart
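
Once the server is up, you can create a VLAN network on physnet1 whose segmentation ID matches one of the PKeys defined in partitions.conf (Juno-era neutron CLI; the network name ib-net1 is an example):

# neutron net-create ib-net1 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1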

Network Node

To configure the Network node:

Prerequisites:

  • An eIPoIB port is configured and up

1. Make sure that the eIPoIB module is enabled in /etc/infiniband/openib.conf. For more information, please refer to the eIPoIB configuration section in the Mellanox OFED User Manual.

E_IPOIB_LOAD=yes


2. Restart openibd:

# service openibd restart


3. Modify the network bridge configuration according to whether you use Open vSwitch or LinuxBridge:

  • 3.1 Open vSwitch: /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings = physnet1:br-<eIPoIB interface>
  • 3.2 LinuxBridge: /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[linux_bridge]
physical_interface_mappings = physnet1:<eIPoIB interface>
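
In the Open vSwitch case, the bridge named in bridge_mappings must exist and contain the eIPoIB port. A sketch, assuming the eIPoIB interface is eth1:

# ovs-vsctl add-br br-eth1
# ovs-vsctl add-port br-eth1 eth1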


NOTE: To obtain the eIPoIB interface name, run the ethtool command (see below) and check that the driver name is eth_ipoib:

# ethtool -i <eIPoIB_interface> 
driver: eth_ipoib
.....
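
If you are not sure which interface this is, a short shell loop can locate it by checking each interface's driver (a sketch):

# for i in $(ls /sys/class/net); do ethtool -i $i 2>/dev/null | grep -q eth_ipoib && echo $i; done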


4. Restart the L2 agent you configured in step 3 (neutron-linuxbridge-agent or neutron-openvswitch-agent):

# service neutron-linuxbridge-agent restart
# service neutron-openvswitch-agent restart

NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

DHCP Server (Usually part of the Network node)

1. Modify /etc/neutron/dhcp_agent.ini as follows, according to whether you use OVS or LinuxBridge:

dhcp_driver = mlnx_dhcp.MlnxDnsmasq
# or interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver


2. Start the DHCP server:

# service neutron-dhcp-agent restart

Compute Nodes

To configure the Compute Node:

1. Configure the Mellanox OpenStack Juno repository:

sudo add-apt-repository http://www.mellanox.com/repository/solutions/openstack/juno/ubuntu/

2. Install the Mellanox packages:

# apt-get install -y openstack-neutron-mellanox eswitchd mlnx-dnsmasq


3. Edit the following file: /usr/lib/systemd/system/neutron-mlnx-agent.service

In that file, change /etc/neutron/plugins/mlnx/mlnx.ini to /etc/neutron/plugins/mlnx/mlnx_conf.ini.

4. Run:

# systemctl enable neutron-mlnx-agent

5. Run:

# systemctl daemon-reload

6. Apply MLNX patch for Juno:

# pushd /usr/lib/python2.7/site-packages/nova/virt/libvirt/ 
# wget http://www.mellanox.com/repository/solutions/openstack/juno/rhel7/patch/mlnx_juno.patch
# patch < mlnx_juno.patch
# popd

7. Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:

options mlx4_ib sm_guid_assign=0
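
The option takes effect once the driver is reloaded (see step 9 below). Afterwards you can verify it through sysfs (a sketch; the path is available while mlx4_ib is loaded):

# cat /sys/module/mlx4_ib/parameters/sm_guid_assign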

8. Restart Nova:

# /etc/init.d/nova-compute restart

9. Restart the driver:

# /etc/init.d/opensm restart
# /etc/init.d/openibd restart

10. In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, the parameters tenant_network_type and network_vlan_ranges should be configured to match the Controller node. In addition, set the physical interface mapping:

physical_interface_mappings = physnet1:<ib_interface> (for example, physnet1:ib0)

11. Modify the file /etc/eswitchd/eswitchd.conf as follows:

fabrics = physnet1:<ib_interface> (for example physnet1:ib0)

12. Start eSwitchd:

# service eswitchd restart

13. Start the Neutron agent:

# service neutron-mlnx-agent restart
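
To confirm the agent registered with Neutron, you can list the agents from the Controller node (Juno-era neutron CLI; a sketch):

# neutron agent-list | grep -i mellanox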

Known issues and Troubleshooting

For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting: https://community.mellanox.com/docs/DOC-1127