Mellanox-Neutron-Kilo-Redhat-InfiniBand

Overview

Mellanox Neutron ML2 Driver

The Mellanox ML2 Mechanism Driver implements the ML2 plugin Mechanism Driver API.

This driver supports the Mellanox embedded switch functionality that is part of the InfiniBand HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as RDMA (Remote Direct Memory Access).
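
For example, once a network exists, a port with the direct vNIC type can be requested through the Neutron CLI and attached to an instance. This is a minimal sketch: the network name ib_net1, port name direct_port1, flavor m1.small and the image placeholder are illustrative only and are not taken from this guide.

# neutron port-create ib_net1 --name direct_port1 --binding:vnic_type direct
# nova boot --flavor m1.small --image <image> --nic port-id=<port-id from the previous command> vm1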

The driver supports the VLAN network type to facilitate virtual networks on InfiniBand fabrics.

  • The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Prerequisites

  • A running OpenStack environment installed with the ML2 plugin on top of Open vSwitch or Linux Bridge.
  • All nodes equipped with a Mellanox ConnectX®-3 network adapter (http://www.mellanox.com/page/products_dyn?product_family=119).
  • Mellanox OFED 2.4 or greater installed on all nodes. Please refer to the Mellanox website for the latest OFED: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
  • SR-IOV enabled on all compute nodes. For more information, please refer to the Mellanox Community document https://community.mellanox.com/docs/DOC-1317 (a verification sketch follows this list).
  • The software package iproute2 (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all compute nodes.
  • The Mellanox OpenStack Kilo repository added:
yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/kilo/rhel7/
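
The SR-IOV prerequisite can be sanity-checked on each compute node as shown below. This is a minimal sketch: it assumes virtual functions are enabled through the mlx4_core module parameters (for example, options mlx4_core num_vfs=16 probe_vf=0 in /etc/modprobe.d/mlx4_core.conf, with the VF count of 16 only an example), as described in the Mellanox Community document referenced above; firmware-side SR-IOV enablement may also be required.

# lspci | grep -i mellanox      (virtual functions appear as additional "Virtual Function" entries)
# ibstat                        (lists the InfiniBand devices and their port state)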

InfiniBand Network

The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.

SM Node

OpenSM Provisioning with SDN Mechanism Driver

The SDN Mechanism Driver (https://community.mellanox.com/docs/DOC-2155) allows OpenSM to dynamically assign PKeys in the InfiniBand network.

More details about applying the SDN Mechanism Driver with NEO (http://www.mellanox.com/page/products_dyn?product_family=220&mtag=mellanox_neo) can be found at https://community.mellanox.com/docs/DOC-2251.

Manual OpenSM Configuration

All PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf). (Automatic configuration is planned for a future phase.)

Add/change the following in the partitions.conf file:

  management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
  vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
  vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
  vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
  vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
  vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
  vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
  vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
  vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
  vlan10=0xa, ipoib, sl=0, defmember=full : ALL;


For example, if 10 VLANs are defined in /etc/neutron/plugins/mlnx/mlnx_conf.ini:

  [MLNX]
  network_vlan_ranges = physnet1:1:10


In addition, change the following in /etc/opensm/opensm.conf:

  allow_both_pkeys TRUE

Finally, restart OpenSM:

# systemctl restart opensm.service
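
To verify that the partitions were applied by OpenSM, the PKey table of a host port can be inspected through sysfs. This is a minimal sketch: the device name mlx4_0 and port number 1 are assumptions and will differ per host.

# cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/*      (the PKeys defined above should appear, e.g. 0x8001 for vlan1 with full membership)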

RDO installation

To install and configure RDO with Packstack:

1. Install Packstack:

# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm
# yum install -y openstack-packstack

2. Prepare the answer file:

# packstack --gen-answer-file=packstack.sriov

3. Modify the answer file (the bridge and interface names br-enp3s0 and enp3s0 below are examples and should match your environment):

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-enp3s0
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-enp3s0:enp3s0
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1:10
CONFIG_NEUTRON_L2_AGENT=openvswitch

4. Run Packstack:

# packstack --answer-file=packstack.sriov
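
After Packstack completes, the deployment can be sanity-checked from the controller. This is a minimal sketch; keystonerc_admin is the credentials file that Packstack normally writes to the home directory of the user that ran it.

# source ~/keystonerc_admin
# neutron agent-list      (the L2 and DHCP agents should be listed as alive)
# neutron net-list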

Controller Node

To configure the Controller node:

1. Configure yum with the Mellanox OpenStack Kilo repository:

# yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/kilo/rhel7

2. Install Mellanox RPMs:

# yum install -y --nogpgcheck openstack-neutron-mellanox python-networking-mlnx 

3. Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

4. Make sure /etc/neutron/plugin.ini is pointing to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link)

5. Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:

[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx, openvswitch
# or mechanism_drivers = mlnx, linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:1:10
[eswitch]
# (StrOpt) Type of Network Interface to allocate for VM:
# mlnx_direct or hostdev according to libvirt terminology
vnic_type = hostdev

6. Start (or restart) the Neutron server:

# systemctl restart neutron-server.service
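
With the Neutron server running with the mlnx mechanism driver, a tenant network that maps to an InfiniBand PKey can be created as usual. This is a minimal sketch: the names ib_net1 and ib_subnet1 and the CIDR are placeholders, and the segmentation ID must fall within the network_vlan_ranges configured above so that a matching PKey exists in OpenSM.

# neutron net-create ib_net1 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 3
# neutron subnet-create ib_net1 10.10.3.0/24 --name ib_subnet1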

Network Node

To configure the Network node:

Prerequisites:

An eIPoIB port is configured and up.

1. Make sure that the eIPoIB module is enabled in /etc/infiniband/openib.conf. For more information, please refer to the eIPoIB configuration section in the Mellanox OFED User Manual.

E_IPOIB_LOAD=yes


2. Restart openibd:

# service openibd restart
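
A quick way to confirm that the eIPoIB driver came up after the restart is to check for its kernel module. This is a minimal sketch and assumes the MLNX OFED module name eth_ipoib (the same driver name reported by ethtool later in this section).

# lsmod | grep eth_ipoib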

3. Configure yum with the Mellanox OpenStack Kilo repository:

# yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/kilo/rhel7

4. Install Mellanox RPMs:

# yum install -y --nogpgcheck mlnx-dnsmasq

5. Modify the network bridge configuration according to whether Open vSwitch or Linux Bridge is used:

  • 5.1 Open vSwitch: /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings = physnet1:br-<eIPoIB interface>
  • 5.2 Linux Bridge: /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[linux_bridge]
physical_interface_mappings = physnet1:<eIPoIB interface>


NOTE: To obtain the eIPoIB interface name, run the ethtool command shown below and check that the driver name is eth_ipoib:

# ethtool -i <eIPoIB_interface> 
driver: eth_ipoib
.....


6. Restart neutron-dhcp-agent and the network bridge agent:

# service neutron-dhcp-agent restart
  • 6.1 Open vSwitch
# service neutron-openvswitch-agent restart
  • 6.2 Linux Bridge
# service neutron-linuxbridge-agent restart


NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

DHCP Server (Usually part of the Network node)

1. Modify /etc/neutron/dhcp_agent.ini as follows, choosing the interface driver according to whether OVS or Linux Bridge is used:

dhcp_driver = mlnx_dhcp.MlnxDnsmasq
dhcp_broadcast_reply = True
1.1 For OVS
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
1.2 For Linux Bridge
    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver


2. Restart the DHCP agent:

# systemctl restart neutron-dhcp-agent.service
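
To confirm that the Mellanox dnsmasq driver is in use, the agent configuration and the dnsmasq processes it spawns can be checked. This is a minimal sketch using the default Red Hat paths from this guide.

# grep dhcp_driver /etc/neutron/dhcp_agent.ini      (should show dhcp_driver = mlnx_dhcp.MlnxDnsmasq)
# systemctl status neutron-dhcp-agent.service
# ps -ef | grep dnsmasq                             (one dnsmasq process per DHCP-enabled network)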

Compute Nodes

To configure the Compute Node:

1. Configure yum with the Mellanox OpenStack Kilo repository:

# yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/kilo/rhel7

2. Install Mellanox RPMs:

# yum install --nogpgcheck -y openstack-neutron-mellanox eswitchd python-networking-mlnx


3. Edit the following file: /usr/lib/systemd/system/neutron-mlnx-agent.service

In this file, change /etc/neutron/plugins/mlnx/mlnx.ini to /etc/neutron/plugins/mlnx/mlnx_conf.ini.

4. Enable the Mellanox agent service:

# systemctl enable neutron-mlnx-agent.service

5. Reload the systemd configuration:

# systemctl daemon-reload

6. Apply MLNX patch for Kilo:

# pushd /usr/lib/python2.7/site-packages/nova/virt/libvirt/ 
# wget http://www.mellanox.com/repository/solutions/openstack/kilo/patch/mlnx_kilo.patch
# patch < mlnx_kilo.patch
# popd

7. Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:

options mlx4_ib sm_guid_assign=0

8. Restart Nova:

# systemctl restart openstack-nova-compute

9. Restart OpenSM and openibd:

# /bin/systemctl restart opensm.service
# /etc/init.d/openibd restart

10. In /etc/neutron/plugins/mlnx/mlnx_conf.ini, configure the tenant_network_type and network_vlan_ranges parameters to match the controller, and set the physical interface mapping:

physical_interface_mappings = physnet1:<ib_interface>    (for example, physnet1:ib0)

11. Modify the file /etc/eswitchd/eswitchd.conf as follows:

fabrics = physnet1:<ib_interface> (for example physnet1:ib0)

12. Start eSwitchd:

# systemctl restart eswitchd

13. Start the Neutron agent:

# systemctl restart neutron-mlnx-agent
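
Once eswitchd and the agent are running, their registration can be verified from the controller. This is a minimal sketch; the exact agent type string printed by neutron agent-list may vary between releases.

# systemctl status eswitchd neutron-mlnx-agent
# neutron agent-list      (the Mellanox agent on this compute node should be listed as alive)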

Known issues and Troubleshooting

For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting: https://community.mellanox.com/docs/DOC-1127