Mellanox-Neutron-Mitaka-Redhat-InfiniBand

Overview

Mellanox Neutron ML2 Driver

Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports the Mellanox embedded switch functionality that is part of the InfiniBand HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
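
As an illustration, a port with the DIRECT vNIC type can be requested through the standard Neutron and Nova CLIs of this release; a minimal sketch in which the network name ib_net, the flavor, the image, and the VM name are placeholder values:

  # neutron port-create ib_net --name sriov_port --binding:vnic_type direct
  # nova boot --flavor m1.small --image centos7 --nic port-id=<port_uuid> vm1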

The driver supports the VLAN network type to facilitate virtual networks on InfiniBand fabrics.

  • Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Prerequisites

  • Clean Red Hat 7.1
  • A running OpenStack environment installed with the ML2 plugin on top of OpenVswitch or Linux Bridge.
  • All nodes equipped with a Mellanox ConnectX®-3 / ConnectX®-4 network adapter
  • Mellanox OFED 2.4 or greater installed on all nodes. Please refer to the Mellanox website for the latest OFED.
  • SR-IOV enabled on all compute nodes. For more information, please refer to the Mellanox Community post: https://community.mellanox.com/docs/DOC-1317
  • The software package iproute2 (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all compute nodes
  • Add the following repositories on each node

Mitaka (latest)

sudo wget https://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo -O  /etc/yum.repos.d/delorean.repo
yum update -y

InfiniBand Network

The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks. For example, a Neutron network with VLAN segmentation ID 10 is carried over PKey 0xa.

SM Node

OpenSM Provisioning with SDN Mechanism Driver

The SDN Mechanism Driver allows OpenSM to dynamically assign PKeys in the InfiniBand network.

More details about applying the SDN Mechanism Driver with Mellanox NEO can be found here.

Manual OpenSM Configuration

All the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf). (Automatic configuration is planned for a future phase.)

For ConnectX®-3/ConnectX®-3Pro use the following configuration

Add/Change the following in the partitions.conf file

  management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
  vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
  vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
  vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
  vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
  vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
  vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
  vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
  vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
  vlan10=0xa, ipoib, sl=0, defmember=full : ALL;

Change the following in /etc/opensm/opensm.conf:

  allow_both_pkeys TRUE

For ConnectX®-4 use the following configuration

Add/Change the following in the partitions.conf.user_ext

vlan1=0x1, ipoib, sl=0, defmember=full: ALL_CAS;
# Storage and management VLANs should be defined as follows
vlan2=0x2, ipoib, sl=0, defmember=full: ALL_CAS;
vlan3=0x3, ipoib, sl=0, defmember=full: ALL_CAS;
# Define OpenSM as a member for all OpenStack VLANs. Otherwise the guest will show the link as down in "ibdev2netdev" and have no connectivity.
vlan4=0x4, ipoib, sl=0, defmember=full: SELF;
vlan5=0x5, ipoib, sl=0, defmember=full: SELF;
vlan6=0x6, ipoib, sl=0, defmember=full: SELF;
vlan7=0x7, ipoib, sl=0, defmember=full: SELF;
vlan8=0x8, ipoib, sl=0, defmember=full: SELF;
vlan9=0x9, ipoib, sl=0, defmember=full: SELF;
vlan10=0xa, ipoib, sl=0, defmember=full: SELF;

Change the following in /etc/opensm/opensm.conf:

virt_enabled 2
no_partition_enforcement TRUE
part_enforce off
allow_both_pkeys FALSE

4. Restart the OpenSM:

# systemctl restart opensmd.service

RDO installation

To install and configure Packstack:

1. Install packstack

   # yum install -y openstack-packstack.noarch

Install the Mellanox RPM on the controller node:

   # yum install -y --nogpgcheck python-networking-mlnx

2. Generate the answer file:

packstack --provision-demo=n --nagios-install=n --os-swift-install=n --os-ceilometer-install=n --os-neutron-ml2-type-drivers=flat,vlan --os-neutron-ml2-tenant-network-types=vlan --os-neutron-ml2-mechanism-drivers=openvswitch,mlnx --os-neutron-ml2-vlan-ranges=default:31:40 --use-epel=y --os-compute-hosts=10.209.24.105,10.209.24.106 --keystone-admin-passwd=admin --keystone-demo-passwd=demo --novacompute-privif=enp5s0 --novanetwork-pubif=em1 --novanetwork-privif=enp5s0 --os-neutron-ovs-bridge-mappings=default:br-enp5s0,extnet:br-ex --os-neutron-ovs-bridge-interfaces=br-enp5s0:enp5s0 --gen-answer-file=answerfile.cfg

3. Run packstack

# packstack --answer-file=answerfile.cfg

Controller Node

To configure the Controller node:

1. Install Mellanox RPMs:

# yum install -y --nogpgcheck python-networking-mlnx 

Neutron Server

1. Make sure ML2 is the current Neutron plugin by checking the core_plugin parameter in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

2. Make sure /etc/neutron/plugin.ini is pointing to /etc/neutron/plugins/ml2/ml2_conf.ini (symbolic link)
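
If the symbolic link is missing, it can be created manually; a minimal sketch assuming the default RDO paths:

  # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini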

3. Modify /etc/neutron/plugins/ml2/ml2_conf.ini by adding the following:

[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx, openvswitch
# or mechanism_drivers = mlnx, linuxbridge
[ml2_type_vlan]
network_vlan_ranges = default:1:10

4. Start (or restart) the Neutron server:

# systemctl restart neutron-server.service

Nova Scheduler

To enable the PciPassthroughFilter, modify /etc/nova/nova.conf:

 scheduler_available_filters = nova.scheduler.filters.all_filters
 scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, PciPassthroughFilter

Network Node

To configure the Network node:

Prerequisites:

E_IPoIB port is configured and up

1. Make sure that the eIPoIB module is enabled in /etc/infiniband/openib.conf. For more information, please refer to the eIPoIB configuration section in the Mellanox OFED User Manual.

E_IPOIB_LOAD=yes


2. Restart openibd:

# service openibd restart

3. Modify the network bridge configuration according to whether OpenVswitch or LinuxBridge is used:

  • 3.1 OpenVswitch: /etc/neutron/plugins/ml2/openvswitch_agent.ini (see the bridge creation sketch after this list)
   [ovs]
   bridge_mappings = default:br-<eIPoIB interface>
  • 3.2 LinuxBridge: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
   [linux_bridge]
   physical_interface_mappings = default:<eIPoIB interface>
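
If the OVS bridge referenced in bridge_mappings does not exist yet, it can be created with ovs-vsctl; a minimal sketch in which br-eth4 and eth4 are placeholder names for the bridge and the eIPoIB interface:

   # ovs-vsctl add-br br-eth4
   # ovs-vsctl add-port br-eth4 eth4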


NOTE: In order to obtain the eIPoIB interface name, run the ethtool command (see below) and check that the driver name is eth_ipoib:

# ethtool -i <eIPoIB_interface> 
driver: eth_ipoib
.....


4. Restart the network bridge agent:

  • 4.1 OpenVswitch
   # systemctl restart neutron-openvswitch-agent.service
  • 4.2 LinuxBridge
   # service neutron-linuxbridge-agent restart


NOTE: For DHCP support, the Network node should use the Mellanox Dnsmasq driver as the DHCP driver.

DHCP Server (Usually part of the Network node)

1. Modify /etc/neutron/dhcp_agent.ini as follows, according to whether OVS or LinuxBridge is used:

  dhcp_driver = networking_mlnx.dhcp.mlnx_dhcp.MlnxDnsmasq
  dhcp_broadcast_reply = True

1.1 For OVS:
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

1.2 For Linux Bridge:
  interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver


2. Restart DHCP server:

# systemctl restart neutron-dhcp-agent.service

Compute Nodes

To configure the Compute Node:

1. Install Mellanox RPMs:

# yum install --nogpgcheck -y python-networking-mlnx

2. Create the file /etc/modprobe.d/mlx4_ib.conf and add the following:

options mlx4_ib sm_guid_assign=0
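
For example, the file can be created in a single step (a sketch of the same step, not an additional requirement):

  # echo "options mlx4_ib sm_guid_assign=0" > /etc/modprobe.d/mlx4_ib.conf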

3. Restart the driver:

# /etc/init.d/openibd restart

Nova Compute

Nova-compute needs to know which PCI devices are allowed to be passed through to the VMs. Also, for SR-IOV PCI devices it needs to know which physical network each VF belongs to. This is done through the pci_passthrough_whitelist parameter under the [DEFAULT] section in /etc/nova/nova.conf. For example, to whitelist and tag the VFs by their PCI address, use the following setting:

pci_passthrough_whitelist = {"address":"*:0a:00.*","physical_network":"default"}

This associates any VF whose PCI address includes ':0a:00.' with the physical network default.
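
Nova also accepts whitelist entries keyed by device name instead of PCI address; an alternative sketch in which ib0 is a placeholder for the SR-IOV physical function:

  pci_passthrough_whitelist = {"devname":"ib0","physical_network":"default"}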

1. Add pci_passthrough_whitelist to /etc/nova/nova.conf

2. Restart Nova:

# systemctl restart openstack-nova-compute
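
As a quick sanity check (not part of the original procedure), the Mellanox PCI devices, including the SR-IOV VFs that the whitelist matches, should be visible on the host:

  # lspci -D | grep -i mellanox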

Neutron MLNX Agent

1. Enable the Neutron MLNX agent:

# systemctl enable neutron-mlnx-agent.service

2. Reload the systemd configuration:

# systemctl daemon-reload

3. In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, the parameters tenant_network_type and network_vlan_ranges should be configured the same as on the controller, and the physical interface mapping should be set under [eswitch]:

 [eswitch]
 physical_interface_mappings = default:<ib_interface>   (for example default:ib0)

4. Modify the file /etc/neutron/plugins/ml2/eswitchd.conf as follows:

fabrics = default:<ib_interface> (for example default:ib0)

5. Start eSwitchd:

# systemctl restart eswitchd

6. Start the Neutron agent:

# systemctl restart neutron-mlnx-agent
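
To verify that the agent has registered with the Neutron server, the agent list can be checked from the controller; for example, with the Mitaka-era CLI:

  # neutron agent-list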

Known issues and Troubleshooting

For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting: https://community.mellanox.com/docs/DOC-1127

If pyzmq is missing, install it:

  # pip install pyzmq