Mellanox-Neutron-Juno-Redhat-InfiniBand

Overview

Mellanox Neutron ML2 Driver

Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports DIRECT (PCI passthrough) and MACVTAP (virtual interface with a tap-like software interface) vNIC types. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).

The driver supports the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.
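
For illustration only (this example is not part of the original guide), a VLAN provider network and a port requesting a hardware vNIC can be created with the Juno-era Neutron/Nova CLI roughly as follows; the network, subnet, flavor, and image names are placeholders:

 #neutron net-create ib-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 3
 #neutron subnet-create ib-net 10.0.3.0/24 --name ib-subnet
 #neutron port-create ib-net --name vm1-port --binding:vnic_type direct
 #nova boot --flavor m1.small --image <image> --nic port-id=<port_uuid> vm1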

  • The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Mellanox Neutron Plugin

For details regarding the Mellanox Neutron plugin, please refer to https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Redhat.

Prerequisites

  • SR-IOV enabled on all compute nodes. For more information, please refer to the Mellanox Community post: https://community.mellanox.com/docs/DOC-1317
  • The iproute2 software package (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all Compute nodes
  • The Mellanox OpenStack Juno repository added:

 yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/juno/rhel7/
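
As a quick sanity check (not part of the original guide), you can verify on each compute node that the Mellanox devices, the IPoIB interface, and iproute2 are present; ib0 is an example interface name:

 #lspci | grep -i mellanox          (the physical function and its SR-IOV virtual functions should be listed)
 #ip link show ib0                  (the IPoIB interface should exist and be UP)
 #rpm -q iproute                    (the iproute2 package on RHEL)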

InfiniBand Network

The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.

SM Node

OpenSM Configuration – Without UFM

To configure OpenSM:

1. Make sure that all the PKeys are predefined in the partitions.conf file (/etc/opensm/partitions.conf).

2. Add/Change the following in the partitions.conf file:

management=0xffff,ipoib, sl=0, defmember=full: ALL, ALL_SWITCHES=full,SELF=full;

3. For every network you want to configure in Neutron, you have to configure the PKey associated with the VLAN of this network (defined in Neutron), for example:

 vlan1=0x1, ipoib, sl=0, defmember=full : ALL;

Below is an example of the partitions.conf file for the case where 10 VLANs are defined in /etc/neutron/plugins/mlnx/mlnx_conf.ini (network_vlan_ranges = physnet1:1:10):

management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
vlan1=0x1, ipoib, sl=0, defmember=full: ALL_CAS;
vlan2=0x2, ipoib, sl=0, defmember=full: ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full: ALL_CAS;
vlan4=0x4, ipoib, sl=0, defmember=full: ALL_CAS;
vlan5=0x5, ipoib, sl=0, defmember=full: ALL_CAS;
vlan6=0x6, ipoib, sl=0, defmember=full: ALL_CAS;
vlan7=0x7, ipoib, sl=0, defmember=full: ALL_CAS;
vlan8=0x8, ipoib, sl=0, defmember=full: ALL_CAS;
vlan9=0x9, ipoib, sl=0, defmember=full: ALL_CAS;
vlan10=0xa, ipoib, sl=0, defmember=full: ALL_CAS;

4. Modify the following line in the file /etc/opensm/opensm.conf from FALSE to TRUE:

allow_both_pkeys TRUE

5. Restart the OpenSM:

#systemctl restart opensmd.service
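
To confirm that OpenSM came back up and that the partitions were programmed (a quick check, not part of the original steps), something like the following can be run on a node attached to the fabric; mlx4_0 is an example HCA name:

 #systemctl status opensmd.service
 #cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/*    (the PKeys defined in partitions.conf should appear)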

Controller Node

To configure the Controller node:

1. Configure yum with the Mellanox OpenStack Juno repository:

 #yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/juno/rhel7/

2. Install Mellanox RPMs:

Set gpgcheck=0 in /etc/yum.conf, then run:
#yum install -y openstack-neutron-mellanox eswitchd python-networking-mlnx mlnx-dnsmasq
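
For reference, the gpgcheck setting lives in the [main] section of /etc/yum.conf; alternatively it can be set per repository in the generated .repo file:

 [main]
 gpgcheck=0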


3. Edit the file /usr/lib/systemd/system/neutron-mlnx-agent.service and change the --config-file path from /etc/neutron/plugins/mlnx/mlnx.ini to /etc/neutron/plugins/mlnx/mlnx_conf.ini.
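
The exact contents of the packaged unit file depend on the version; as a sketch, the ExecStart line would change roughly as follows (everything apart from the mlnx_conf.ini path is an example):

 # before (example)
 ExecStart=/usr/bin/neutron-mlnx-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/mlnx/mlnx.ini
 # after
 ExecStart=/usr/bin/neutron-mlnx-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/mlnx/mlnx_conf.ini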

4. Run:

 #systemctl enable neutron-mlnx-agent.service

5. Run:

 #systemctl daemon-reload

6. Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

7. Make sure /etc/neutron/plugin.ini is pointing to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link)

8. Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:

[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx, openvswitch
# or mechanism_drivers = mlnx, linuxbridge
[ml2_type_vlan]
network_vlan_ranges = <network_name>:2:100
[eswitch]
# (StrOpt) Type of Network Interface to allocate for VM:
# mlnx_direct or hostdev according to libvirt terminology
vnic_type = hostdev

9. Start (or restart) the Neutron server:

#systemctl restart neutron-server.service

Compute Nodes

To configure each Compute node:

1. Configure yum with the Mellanox OpenStack Juno repository:

 #yum-config-manager --add-repo http://www.mellanox.com/repository/solutions/openstack/juno/rhel7/

2. Install Mellanox RPMs:

Set gpgcheck=0 in /etc/yum.conf, then run:
#yum install -y openstack-neutron-mellanox eswitchd python-networking-mlnx mlnx-dnsmasq


3. Edit the file /usr/lib/systemd/system/neutron-mlnx-agent.service and change the --config-file path from /etc/neutron/plugins/mlnx/mlnx.ini to /etc/neutron/plugins/mlnx/mlnx_conf.ini.

4. Run:

 #systemctl enable neutron-mlnx-agent.service

5. Run:

 #systemctl daemon-reload

6. Apply MLNX patch for Juno:

 #wget http://www.mellanox.com/repository/solutions/openstack/juno/rhel7/patch/mlnx_juno.patch
 #patch < mlnx_juno.patch

7. Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:

options mlx4_ib sm_guid_assign=0

8. Restart Nova:

 #systemctl restart openstack-nova-compute

9. Restart the driver:

 # /bin/systemctl restart opensm.service
 # /etc/init.d/openibd restart
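
After the restart, the module parameter can be checked to confirm that the option in /etc/modprobe.d/mlx4_ib.conf was picked up (a quick check, not part of the original steps):

 #cat /sys/module/mlx4_ib/parameters/sm_guid_assign    (expected value: 0)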

10. In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, configure the parameters tenant_network_type and network_vlan_ranges with the same values as on the Controller node, and set the physical interface mapping:

 physical_interface_mappings = physnet1:<ib_interface>   (for example physnet1:ib0)
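
Putting step 10 together, a minimal /etc/neutron/plugins/mlnx/mlnx_conf.ini consistent with the settings above might look like the sketch below; the section and option names follow the Mellanox sample configuration, so verify them against the sample file shipped with the package:

 [mlnx]
 tenant_network_type = vlan
 network_vlan_ranges = physnet1:1:10

 [eswitch]
 physical_interface_mappings = physnet1:ib0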

11. Modify the file /etc/eswitchd/eswitchd.conf as follows:

 fabrics = physnet1:<ib_interface> (for example physnet1:ib0)

12. Start eSwitchd:

 #systemctl restart eswitchd

13. Start the Neutron agent:

 #systemctl restart neutron-mlnx-agent
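
Once the agent is running, it should register with the Neutron server; from the Controller node this can be checked with the following command (the exact agent name shown depends on the package version):

 #neutron agent-list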

Network Node

To configure the Network node:

1. Make sure that the eIPoIB module is enabled in /etc/infiniband/openib.conf. For more information, please refer to the eIPoIB configuration section in the Mellanox OFED User Manual:

E_IPOIB_LOAD=yes

2. Restart openibd:

#service openibd restart

3. If you are using Linux bridge, modify the file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini as follows:

[linux_bridge]
physical_interface_mappings = physnet1:<eIPoIB interface>

If you are using Open vSwitch, modify /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini instead:

[ovs]
bridge_mappings = physnet1:br-<eIPoIB interface>
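
With the Open vSwitch option, the provider bridge referenced by bridge_mappings has to exist and contain the eIPoIB interface; a minimal sketch, assuming the eIPoIB interface is eth4:

 #ovs-vsctl add-br br-eth4
 #ovs-vsctl add-port br-eth4 eth4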

NOTE: To obtain the eIPoIB interface name, run the ethtool command (see below) and check that the driver name is eth_ipoib:

#ethtool -i <eIPoIB_interface> 
driver: eth_ipoib
version: 1.0.0
firmware-version: 1 
bus-info: ib0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
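
To locate the eIPoIB interface automatically, a one-liner such as the following can be used (a convenience sketch, not from the original guide):

 #for i in $(ls /sys/class/net); do ethtool -i $i 2>/dev/null | grep -q eth_ipoib && echo $i; done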

4. Restart the L2 agent (neutron-linuxbridge-agent or neutron-openvswitch-agent, according to your setup):

#systemctl restart neutron-linuxbridge-agent.service
#systemctl restart neutron-openvswitch-agent.service

NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

DHCP Server (Usually part of the Network node)

1. Modify /etc/neutron/dhcp_agent.ini as follows, choosing the interface driver according to OVS or Linux bridge:

 dhcp_driver = mlnx_dhcp.MlnxDnsmasq
 dhcp_broadcast_reply = True
 interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 # or, for Linux bridge: interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

2. Start the DHCP agent:

#systemctl restart neutron-dhcp-agent.service

Known issues and Troubleshooting

For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting: https://community.mellanox.com/docs/DOC-1127