Mellanox-Neutron-Icehouse-Redhat-InfiniBand
Overview
Mellanox Neutron ML2 Driver
The Mellanox ML2 mechanism driver implements the ML2 plugin mechanism driver API.
This driver supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA, and provides functional parity with the Mellanox Neutron plugin.
The Mellanox ML2 mechanism driver supports the DIRECT (PCI passthrough) and MACVTAP (virtual interface with a tap-like software interface) vnic types. For vnic type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
The driver supports VLAN network type to facilitate virtual networks either on Ethernet or InfiniBand fabrics.
• Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
• The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
Mellanox Neutron Plugin
Please note that the Mellanox plug-in is deprecated in the Icehouse release and will not be supported in the Juno release. The features of the plug-in are now part of the ML2 plug-in, in the form of the Mellanox mechanism driver.
For details regarding Mellanox Neutron plugin, please refer to https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Redhat.
Mellanox Nova VIF Driver
The Mellanox Nova VIF driver should be used when running the Mellanox mechanism driver. The VIF driver supports the VIF plugin by binding a vNIC of type DIRECT to the embedded switch port. The VIF driver for the MACVTAP type is included in the Nova libvirt generic VIF driver. For SR-IOV pass-through (vnic type DIRECT), use the VIF driver from the Mellanox git repository or RPM.
Prerequisites
- A running OpenStack environment installed with the ML2 plugin on top of OVS.
- All nodes equipped with Mellanox ConnectX®-3 Network Adapter (http://www.mellanox.com/page/products_dyn?product_family=119)
- Mellanox OFED 2.2 or greater installed on all nodes. Please refer to Mellanox website for the latest OFED: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
- SR-IOV enabled on all compute nodes. For more information, please refer to the Mellanox Community (a quick verification sketch follows this list).
- The software package iproute2 (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all compute nodes.
- VLANs configured on the ports in the switch.
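One quick way to confirm that SR-IOV is active on a compute node is to check that virtual functions appear in lspci. This is only a sketch; the device model, PCI addresses and number of VFs will differ on your system:
#lspci | grep -i mellanox
03:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
03:00.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
03:00.2 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]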
InfiniBand Network
The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.
SM Node
OpenSM Configuration - Without UFM
To configure OpenSM:
1. Make sure that all the PKeys are predefined in the partitions.conf file (/etc/opensm/partitions.conf).
2. Add/change the following in the partitions.conf file:
management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
3. For every network you want to configure in Neutron, configure the PKey associated with the VLAN of that network (as defined in Neutron):
vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
Below is an example of the partitions.conf file for the case where 10 VLANs are defined in /etc/neutron/plugins/mlnx/mlnx_conf.ini:
[MLNX]
network_vlan_ranges = default:1:10
The corresponding partitions.conf:
management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
vlan1=0x1, ipoib, sl=0, defmember=full : ALL_CAS;
vlan2=0x2, ipoib, sl=0, defmember=full : ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full : ALL_CAS;
vlan4=0x4, ipoib, sl=0, defmember=full : ALL_CAS;
vlan5=0x5, ipoib, sl=0, defmember=full : ALL_CAS;
vlan6=0x6, ipoib, sl=0, defmember=full : ALL_CAS;
vlan7=0x7, ipoib, sl=0, defmember=full : ALL_CAS;
vlan8=0x8, ipoib, sl=0, defmember=full : ALL_CAS;
vlan9=0x9, ipoib, sl=0, defmember=full : ALL_CAS;
vlan10=0xa, ipoib, sl=0, defmember=full : ALL_CAS;
4. Modify the following line in the file /etc/opensm/opensm.conf from FALSE to TRUE:
allow_both_pkeys TRUE
5. Restart OpenSM:
#service opensmd restart
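To verify that the partitions were distributed by the SM, the PKey table of a host port can be read back from sysfs. This is a sketch; mlx4_0 and port 1 are assumptions, adjust them to your HCA and port:
#cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/*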
OpenSM configuration - With UFM
1. Make sure UFM is installed and connected to your fabric.
2. Edit /opt/ufm/conf/opensm/opensm.conf and change the following values:
a) Set allow_both_pkeys to TRUE (by default, allow_both_pkeys is FALSE):
#Allow both full and limited membership on the same partition
allow_both_pkeys TRUE
b) Set sm_assign_guid_func to uniq_count (by default, sm_assign_guid_func is base_port):
#SM assigned Alias GUIDs algorithm
sm_assign_guid_func uniq_count
3. Edit the UFM user extension partitions.conf file to override the default partitioning configuration.
a) Edit the file /opt/ufm/conf/partitions.conf.user_ext (it should be empty after a fresh UFM installation).
b) Add the following line to the file to enable both full and limited membership on the management PKey:
management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
c) Add the additional PKey definitions that are relevant to the specific setup, for example:
vlan1=0x1, ipoib, sl=0, defmember=full : ALL_CAS;
vlan2=0x2, ipoib, sl=0, defmember=full : ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full : ALL_CAS;
vlan4=0x4, ipoib, sl=0, defmember=full : ALL_CAS;
vlan5=0x5, ipoib, sl=0, defmember=full : ALL_CAS;
vlan6=0x6, ipoib, sl=0, defmember=full : ALL_CAS;
vlan7=0x7, ipoib, sl=0, defmember=full : ALL_CAS;
vlan8=0x8, ipoib, sl=0, defmember=full : ALL_CAS;
vlan9=0x9, ipoib, sl=0, defmember=full : ALL_CAS;
vlan10=0xa, ipoib, sl=0, defmember=full : ALL_CAS;
vlan11=0xb, ipoib, sl=0, defmember=full : ALL_CAS;
vlan12=0xc, ipoib, sl=0, defmember=full : ALL_CAS;
vlan13=0xd, ipoib, sl=0, defmember=full : ALL_CAS;
vlan14=0xe, ipoib, sl=0, defmember=full : ALL_CAS;
vlan15=0xf, ipoib, sl=0, defmember=full : ALL_CAS;
vlan16=0x10, ipoib, sl=0, defmember=full : ALL_CAS;
vlan17=0x11, ipoib, sl=0, defmember=full : ALL_CAS;
vlan18=0x12, ipoib, sl=0, defmember=full : ALL_CAS;
4. Restart UFM.
Stand-alone:
#/etc/init.d/ufmd restart
High-availability:
#/etc/init.d/ufmha restart
Neutron Server Node
We include the linuxbridge mechanism driver so that the DHCP server can use the Linux Bridge interface driver.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini as follows (the VLAN range is an example):
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = linuxbridge,mlnx
[ml2_type_vlan]
network_vlan_ranges = default:2:10
[securitygroup]
enable_security_group = True
[eswitch]
vnic_type = hostdev
apply_profile_patch = True
The mapping between VLAN and PKey is as follows: PKey = 0x8000 + VLAN ID. For example, VLAN 2 maps to PKey 0x8002.
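A quick way to compute the PKey for a given VLAN ID from the shell (a simple arithmetic sketch, not a required configuration step):
#printf '0x%x\n' $((0x8000 + 2))
0x8002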
Compute Nodes
Installation
1. Download the Mellanox OpenStack repo file:
#wget -O /etc/yum.repos.d/mlnx-icehouse.repo http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-icehouse.repo
2. Install the eswitchd RPM:
#yum install eswitchd
3. If you would like to use Ethernet in para-virtualized mode, the VIF driver is already included in the Nova package. Otherwise, install the Mellanox VIF driver (make sure Nova is installed on your server):
#yum install mlnxvif
4. Install the required RPM for the Neutron agent:
#yum install openstack-neutron-mellanox
Configuration
Create the file /etc/modprobe.d/mlx4_ib.conf with the following content:
options mlx4_ib sm_guid_assign=0
In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, set:
physical_interface_mapping = default:autoib
The tenant_network_type, vnic_type and network_vlan_ranges parameters should be configured with the same values as on the controller.
autoib can be replaced by the name of the PF (see the sketch below).
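If you are unsure of the PF net device name, the ibdev2netdev utility shipped with Mellanox OFED lists the mapping between HCA ports and net devices. This is a sketch; the device names below are examples:
#ibdev2netdev
mlx4_0 port 1 ==> ib0 (Up)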
In the file /etc/eswitchd/eswitchd.conf, set:
fabrics = default:autoib (or default:ib0)
Restart the driver:
#service openibd restart
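After the restart, you can verify that the module parameter took effect by reading it back from sysfs (a sketch, assuming the standard module parameter path):
#cat /sys/module/mlx4_ib/parameters/sm_guid_assign
0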
Start eswitchd, and then start the Neutron agent:
#service eswitchd restart
#service neutron-mlnx-agent restart
Verify that the Mellanox VIF driver is configured in /etc/nova/nova.conf:
[libvirt]
vif_driver=mlnxvif.vif.MlxEthVIFDriver
Restart Nova:
#service openstack-nova-compute restart
Start Services
1. Restart Nova.
#service openstack-nova-compute restart
2. Start the eswitch daemon:
#service eswitchd start
3. Start the Neutron agent:
#service neutron-mlnx-agent start
Note: the eswitch daemon must be running before the Neutron agent is started.
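To confirm that the agents registered with the Neutron server, you can list them from a node that has the neutron CLI and admin credentials (a sketch; the exact agent list depends on your deployment):
#neutron agent-list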
Network Node
Here we use the Linux Bridge plugin.
The eIPoIB module should be up and configured. In /etc/infiniband/openib.conf, set:
E_IPOIB_LOAD=yes
And restart openibd:
#service openibd restart
Please refer to the eIPoIB configuration section in the Mellanox OFED User Manual. Once the eIPoIB interface is available, use it in the Linux Bridge agent configuration.
For example, assume eth1 is the eIPoIB interface.
To check that the interface type is eIPoIB, run the following command and verify that the driver is "eth_ipoib":
#ethtool -i <interface>
driver: eth_ipoib
version: 1.0.0
firmware-version: 1
bus-info: ib0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
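If you are not sure which interface is the eIPoIB one, a simple loop over all net devices can locate it (a sketch; adjust to your environment):
#for n in $(ls /sys/class/net); do ethtool -i $n 2>/dev/null | grep -q eth_ipoib && echo $n; done
eth1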
In the file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini, set:
[linux_bridge]
physical_interface_mappings = default:eth1
Restart neutron-linuxbridge-agent and neutron-dhcp-agent:
#service neutron-linuxbridge-agent restart
#service neutron-dhcp-agent restart
DHCP Server
For DHCP support, the network node should use the Mellanox dnsmasq driver as the DHCP driver.
# wget http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-dnsmasq-2014.1.1-1.noarch.rpm
# yum localinstall mlnx-dnsmasq-2014.1.1-1.noarch.rpm
In addition, dnsmasq must be upgraded to version 2.66 or higher.
#wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/kalyaka/CentOS_CentOS-6/x86_64/dnsmasq-2.66-3.1.x86_64.rpm
#yum localinstall dnsmasq-2.66-3.1.x86_64.rpm
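You can confirm the installed dnsmasq version with a quick check:
#dnsmasq --version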
Change the following in /etc/neutron/dhcp_agent.ini:
dhcp_driver = mlnx_dhcp.MlnxDnsmasq
Start the DHCP server:
# service neutron-dhcp-agent restart
Usage Examples
- To create an SR-IOV interface, refer to the "Creating an SR-IOV Instance" chapter of the Mellanox OpenStack solution document.
- To create a para-virtualized interface, refer to the "Creating a Para-Virtualized vNIC Instance" chapter of the Mellanox OpenStack solution document.
Known issues and Troubleshooting
For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting.
References
1. http://www.mellanox.com/openstack/
2. Mellanox OpenStack Solution Reference Architecture
3. Mellanox OpenStack Troubleshooting
For more details, please send your question to openstack@mellanox.com.
Return to Mellanox-OpenStack wiki page.