== Overview ==

'''Mellanox Neutron ML2 Driver'''

The Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports the Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) and MACVTAP (virtual interface with a tap-like software interface) vnic types. For vnic type configuration API details, please refer to the ML2 configuration reference guide. Hardware vNICs mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).

The driver supports the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.

* The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
* The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

'''Mellanox Neutron Plugin'''

Please note that the Mellanox plug-in is deprecated in the Icehouse release and will not be supported in the Juno release. The features of the plug-in are now part of the ML2 plug-in in the form of the Mellanox mechanism driver.

For details regarding the Mellanox Neutron plugin, please refer to https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Redhat.

'''Mellanox Nova VIF Driver'''

The Mellanox Nova VIF driver should be used when running the Mellanox mechanism driver. The VIF driver supports the VIF plugin by binding a vNIC of type DIRECT to the embedded switch port. The VIF driver for the MACVTAP type is included in the Nova libvirt generic VIF driver. For SR-IOV pass-through (vnic type DIRECT), you need to use the VIF driver from the Mellanox git repository or RPM.

== Prerequisites ==
 
* A running OpenStack environment installed with the ML2 plugin on top of OVS.
* All nodes equipped with a Mellanox ConnectX®-3 Network Adapter (http://www.mellanox.com/page/products_dyn?product_family=119).
* Mellanox OFED 2.2 or greater installed on all nodes. Please refer to the Mellanox website for the latest OFED: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
* SR-IOV enabled on all compute nodes. For more information, please refer to the [http://community.mellanox.com/docs/DOC-1317 Mellanox Community].
* The iproute2 software package (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all compute nodes.
* VLANs configured on the ports in the switch.
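
A quick way to sanity-check the adapter, OFED and SR-IOV prerequisites on a node is sketched below; these are standard MLNX_OFED and Linux tools, and the exact output depends on your hardware and configuration:

  #ofed_info -s                    (prints the installed MLNX_OFED version)
  #lspci | grep -i mellanox        (lists the ConnectX-3 physical and virtual functions)
  #ip link show                    (confirms iproute2 is available and lists the interfaces)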
'''Installation with Red Hat Enterprise Linux OpenStack Platform'''

Make sure you follow the Red Hat prerequisites required by Red Hat Enterprise Linux OpenStack Platform. Please refer to https://access.redhat.com/products/Cloud/OpenStack/ for additional information regarding this product.

It is assumed that OpenStack is installed with the ML2 plugin.
 
 
 
You can install it using packstack:
 
 
 
1. Create an answer file:
 
    #packstack --gen-answer-file=GEN_ANSWER_FILE
 
 
 
2. Change the following in GEN_ANSWER_FILE:
 
    CONFIG_NEUTRON_L2_PLUGIN=ml2
 
    CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
 
    CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
 
    CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
 
    CONFIG_NEUTRON_ML2_VLAN_RANGES=default:2:10
 
    CONFIG_NEUTRON_L2_AGENT=openvswitch
 
 
 
3. Run packstack:
 
    #packstack --answer-file=GEN_ANSWER_FILE
 
 
 
Other Red Hat references:
 
 
 
1. [http://www.redhat.com/resourcelibrary/reference-architectures/deploying-and-using-red-hat-openstack-rhos-2-dot-1 RHOS reference document]
 
 
 
2. [http://openstack.redhat.com/Quickstart RDO QuickStart]
 
 
 
 
 
'''Neutron server'''
 
 
 
No special prerequisites needed.
 
 
 
 
 
'''Compute Nodes'''
 
 
 
1. python-pip (use "yum install python-pip")
 
 
 
2. Compute nodes should be equipped with Mellanox ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])
 
 
 
3. Mellanox OFED 2.2 or greater is installed. Refer to Mellanox website for the latest OFED ([http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers])
 
 
 
4. Enable SR-IOV on ConnectX-3 card. Refer to [http://community.mellanox.com/docs/DOC-1317 Mellanox Community]
 
 
 
5. The software package iproute2 ([https://www.kernel.org/pub/linux/utils/net/iproute2/ Code], [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed. Required only on compute nodes.
 
 
 
6. oslo.config (use "pip-python install oslo.config"). Required only on compute nodes.
 
 
 
 
 
'''Network Node'''
 
 
 
1. Network node should be equipped with Mellanox ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])
 
 
 
2. Mellanox OFED 2.2 or greater is installed. Refer to Mellanox website for the latest OFED ([http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers])
 
  
 
= Ethernet Network =

== Neutron Server Node ==

=== Installation ===
 
 
1. Make sure the ML2 plugin is the current Neutron plugin by checking the core_plugin option in /etc/neutron/neutron.conf:

  core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
 
2. Make sure /etc/neutron/plugin.ini is a symbolic link pointing at /etc/neutron/plugins/ml2/ml2_conf.ini.
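
If the link does not yet exist, it can be created as follows (assuming the default file locations shown above):

  #ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini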
  
 
3. Modify /etc/neutron/plugins/ml2/ml2_conf.ini and include the following:

  [ml2]
  type_drivers = vlan,flat
  tenant_network_types = vlan
  mechanism_drivers = openvswitch,mlnx
  [ml2_type_vlan]
  network_vlan_ranges = default:2:100
  [eswitch]
  vnic_type = hostdev
  apply_profile_patch = True
Click [http://docs.openstack.org/trunk/config-reference/content/networking-options-plugins-ml2.html here] for ML2 configuration options.


=== Start Services ===

Start (or restart) the Neutron server:

  #service neutron-server restart
 
 
== Compute Nodes ==

=== Installation ===

1. Download the Mellanox OpenStack repo file:

  #wget -O /etc/yum.repos.d/mlnx-icehouse.repo http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-icehouse.repo

2. Install the eSwitch Daemon (eSwitchd) RPM:

  #yum install eswitchd

3. In case you would like to use Ethernet in para-virtualized mode only, the VIF driver is already included in the Nova package. Otherwise, install the Mellanox VIF driver (make sure Nova is installed on your server):

  #yum install mlnxvif

4. Install the required RPM for the Neutron agent:

  #yum install openstack-neutron-mellanox

=== Configuration ===

1. Configure the fabrics parameter in /etc/eswitchd/eswitchd.conf if needed:

  fabrics='<network name as in ml2>:<interface>'

Please refer to the Mellanox Community for the eSwitchd installation notes ([http://community.mellanox.com/docs/DOC-1126 click here]).
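
For example, assuming the ML2 physical network is named "default" (as in the ml2_conf.ini example above) and the Mellanox interface on this node is eth2 (an illustrative name), the line would read:

  fabrics = default:eth2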
 
 
 
2. In /etc/nova/nova.conf:

Check that the compute driver is libvirt:

  [DEFAULT]
  compute_driver=libvirt.LibvirtDriver

Change the VIF driver:

  [libvirt]
  vif_driver=mlnxvif.vif.MlxEthVIFDriver

In case you did not install the Mellanox VIF driver and you plan to use Ethernet in para-virtualized mode only, make sure the VIF driver is as follows:

  vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
 
 
 
3. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment.

  [AGENT]
  '''polling_interval''' - Polling interval (in seconds) for existing vNICs. The default is 2 seconds.
  '''rpc_support_old_agents''' - Must be set to 'True'.

  [ESWITCH]
  '''physical_interface_mapping''' - Maps each physical network name to the physical interface (on top of the Mellanox adapter) connecting the node to that physical network. The format of this parameter is <fabric name>:<PF name> (only relevant on compute nodes). The PF name can either be the PF (Physical Function) name, 'autoeth' for automatic Ethernet configuration, or 'autoib' for automatic InfiniBand configuration. The default is "default:autoeth".
  '''daemon_endpoint''' - eSwitch daemon endpoint connection (URL). The default value is 'tcp://127.0.0.1:60001'.
  '''request_timeout''' - The number of milliseconds the agent will wait for a response on a request to the daemon. The default is 3000 msec.

For a plugin configuration file example (Icehouse), please refer to the [https://github.com/openstack/neutron/blob/stable/icehouse/etc/neutron/plugins/mlnx/mlnx_conf.ini Mellanox config ini file].
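
A minimal sketch matching the descriptions above might look as follows (the values restate the defaults described above, with rpc_support_old_agents set to 'True' as required, and rely on automatic Ethernet configuration):

  [AGENT]
  polling_interval = 2
  rpc_support_old_agents = True
  [ESWITCH]
  physical_interface_mapping = default:autoeth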
 
 
 
=== Start Services  ===
 
 
 
1. Restart Nova:
 
  #service openstack-nova-compute restart
 
 
 
2. Start the eSwitch daemon:
 
    #service eswitchd start
 
 
 
3. Start the Neutron agent:
 
  #service neutron-mlnx-agent start
 
 
 
Note: The eSwitch daemon should be running before the Neutron agent is started.
 
  
 
== Network Node ==

To configure the Network node:

1. Install the Neutron Open vSwitch agent, the Neutron DHCP agent and the L3 agent:

  #yum install openstack-neutron-openvswitch

2. Change the following configuration in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. In the example below, "default" is the name of the physical network as configured in /etc/neutron/plugins/ml2/ml2_conf.ini, and "public" is the physical external network:

  bridge_mappings = default:br-eth3,public:br-ex
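
If these bridges do not exist yet, they can be created with the standard Open vSwitch commands shown below. The bridge and interface names (br-eth3 on the physical port eth3, br-ex for the external network) follow the example above and should be adapted to your setup:

  #ovs-vsctl add-br br-eth3
  #ovs-vsctl add-port br-eth3 eth3
  #ovs-vsctl add-br br-ex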
 
 
 
3. Configure the DHCP server according to the following guidelines.

Update /etc/neutron/dhcp_agent.ini with:

  interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

For additional information refer to:

http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html

4. Start the DHCP server:

  #service neutron-openvswitch-agent start
  #service neutron-dhcp-agent start
 
 
5. Configure the L3 agent configuration file /etc/neutron/l3_agent.ini:
 
 
 
    gateway_external_network_id = d4fdfebb-e027-4acd-bed4-1d96e896f336
 
    router_id = 41bf1aa0-3daf-4f51-9d23-0a4b15020c36
 
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 
    external_network_bridge = br-ex
 
 
 
The above is an example of configuring one router for tenants. Your values for gateway_external_network_id, router_id and external_network_bridge may differ.
 
 
 
6. Start the L3 agent:

  #service neutron-l3-agent restart
 
 
 
= InfiniBand Network =
 
The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.
 
 
 
== SM Configuration  ==
 
 
 
=== OpenSM configuration - Without UFM ===
 
 
 
All the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf).

(Automatic configuration is planned for a future phase.)
 
 
 
Add/change the following in the partitions.conf file:
 
  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
 
 
 
For every network you want to configure in Neutron, you have to configure the PKey associated with the VLAN of this network (as defined in Neutron):
 
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
 
 
 
For example, if we have 10 VLANs defined in the configuration in /etc/neutron/plugins/mlnx/mlnx_conf.ini:
 
 
 
  [MLNX]
 
  network_vlan_ranges = default:1:10
 
 
 
We'll have the following configuration in the partitions.conf file:
 
 
  management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
 
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan2=0x2, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan3=0x3, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan4=0x4, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan5=0x5, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan6=0x6, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan7=0x7, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan8=0x8, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan9=0x9, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan10=0xa, ipoib, sl=0, defmember=full : ALL_CAS;
 
 
 
Change the following in /etc/opensm/opensm.conf:
 
  allow_both_pkeys TRUE
 
 
 
 
 
Restart OpenSM:
 
  #service opensmd restart
 
 
 
=== OpenSM configuration - With UFM ===
 
 
 
1. Make sure UFM is installed and connected to your fabric.
 
 
 
2. Edit /opt/ufm/conf/opensm/opensm.conf and change the following values:
 
 
 
a) Set allow_both_pkeys to TRUE (by default, allow_both_pkeys is FALSE):
 
 
 
  #Allow both full and limited membership on the same partition
 
  allow_both_pkeys TRUE
 
 
 
b) Set sm_assign_guid_func to uniq_count (by default, sm_assign_guid_func is base_port):
 
 
 
  #SM assigned Alias GUIDs algorithm
 
  sm_assign_guid_func uniq_count
 
 
 
3. Edit the UFM user extension partitions.conf to override the default partitioning configuration.
 
 
 
a) Edit the file /opt/ufm/conf/partitions.conf.user_ext (it should be empty after a fresh UFM installation).
 
 
 
b) Add the following line to the file, which enables both full and limited membership on the management PKey:
 
 
 
  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
 
 
 
c) Add the additional PKey definitions relevant to the specific setup, for example:
 
 
 
  vlan1=0x1, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan2=0x2, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan3=0x3, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan4=0x4, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan5=0x5, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan6=0x6, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan7=0x7, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan8=0x8, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan9=0x9, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan10=0xa, ipoib, sl=0, defmember=full : ALL_CAS;
 
  vlan11=0xb, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan12=0xc, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan13=0xd, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan14=0xe, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan15=0xf, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan16=0x10, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan17=0x11, ipoib, sl=0, defmember=full : ALL_CAS;

  vlan18=0x12, ipoib, sl=0, defmember=full : ALL_CAS;
 
 
 
4. Restart UFM
 
 
Stand-alone
 
 
 
  #/etc/init.d/ufmd restart
 
 
 
High-availability
 
 
 
  #/etc/init.d/ufmha restart
 
 
 
== Neutron Server Node ==
 
We are using the linuxbridge mechanism driver, so we can use the DHCP server with the Linux Bridge interface driver.
 
 
 
Edit /etc/neutron/plugins/ml2/ml2_conf.ini as follows (the VLAN range is an example):

  [ml2]
  type_drivers = vlan,flat
  tenant_network_types = vlan
  mechanism_drivers = linuxbridge,mlnx
  [ml2_type_vlan]
  network_vlan_ranges = default:2:10
  [securitygroup]
  enable_security_group = True
  [eswitch]
  vnic_type = hostdev
  apply_profile_patch = True
 
 
 
The mapping between VLAN and PKey is as follows: VLAN X maps to PKey 0x8000 + X. For example, VLAN 2 is PKey 0x8002; with the example range default:2:10 above, the corresponding PKeys are 0x8002 through 0x800a.
 
 
 
== Compute Nodes ==
 
 
 
=== Installation ===
 
1. Download the Mellanox OpenStack repo file:
 
#wget -O /etc/yum.repos.d/mlnx-icehouse.repo  http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-icehouse.repo
 
 
 
2. Install the eswitchd RPM:
 
#yum install eswitchd
 
 
 
3. In case you would like to use para-virtualized mode only, the VIF driver is already included in the Nova package. Otherwise, install the Mellanox VIF driver (make sure Nova is installed on your server):
 
#yum install mlnxvif
 
 
 
4. Install the required RPM for the Neutron agent:
 
#yum install openstack-neutron-mellanox
 
 
 
=== Configuration ===
 
 
 
Create the file /etc/modprobe.d/mlx4_ib.conf and add the following line:
 
    options mlx4_ib sm_guid_assign=0
 
 
 
In the file /etc/neutron/plugins/mlnx/mlnx_conf.ini, set:
 
    physical_interface_mapping = default:autoib
 
 
 
The tenant_network_type, vnic_type and network_vlan_ranges parameters should be configured the same as on the controller.
 
 
 
autoib can be replaced by the name of the PF. 
 
 
 
Change the file /etc/eswitchd/eswitchd.conf:

  fabrics = default:autoib (or default:ib0)
 
 
 
The driver should then be restarted:

  #service openibd restart

eswitchd should be started first, followed by the Neutron agent:

  #service eswitchd restart

  #service neutron-mlnx-agent restart
 
 
 
Verify that the Mellanox VIF driver is configured in /etc/nova/nova.conf:
 
  [libvirt]
 
  vif_driver=mlnxvif.vif.MlxEthVIFDriver
 
 
 
Restart Nova:
 
  #service openstack-nova-compute restart
 
 
 
=== Start Services  ===
 
 
 
1. Restart Nova:
 
  #service openstack-nova-compute restart
 
 
 
2. Start the eSwitch daemon:
 
    #service eswitchd start
 
 
 
3. Start the Neutron agent:
 
  #service neutron-mlnx-agent start
 
 
 
Note: The eSwitch daemon should be running before the Neutron agent is started.
 
 
 
== Network Node ==
 
 
 
Here we use the Linux Bridge plugin.
 
 
 
The eIPoIB module should be up and configured. In /etc/infiniband/openib.conf, set:
 
 
 
  E_IPOIB_LOAD=yes

And restart openibd:

  #service openibd restart
 
Please refer to the eIPoIB configuration in the Mellanox OFED User Manual.

Once we have the eIPoIB interface, we use it in the Linux Bridge agent configuration.
 
 
 
For example, assuming eth1 is the eIPoIB interface:
 
 
 
To check that the interface type is eIPoIB, run the following command and verify that the driver is "eth_ipoib":
 
 
 
  #ethtool -i <interface>
 
  driver: eth_ipoib
 
  version: 1.0.0
 
  firmware-version: 1
 
  bus-info: ib0
 
  supports-statistics: yes
 
  supports-test: no
 
  supports-eeprom-access: no
 
  supports-register-dump: no
 
  supports-priv-flags: no
 
Change the file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:
 
 
 
  [linux_bridge]
 
  physical_interface_mappings =  default:eth1 
 
 
 
Restart neutron-linuxbridge-agent and neutron-dhcp-agent:
 
 
 
  #service neutron-linuxbridge-agent restart
 
 
 
=== DHCP Server ===
 
 
 
For DHCP support, the Network node should use the Mellanox dnsmasq driver as the DHCP driver.
 
  # wget http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-dnsmasq-2014.1.1-1.noarch.rpm
 
  # yum localinstall mlnx-dnsmasq-2014.1.1-1.noarch.rpm
 
 
 
In addition, dnsmasq must be upgraded to version 2.66 or higher.
 
  #wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/kalyaka/CentOS_CentOS-6/x86_64/dnsmasq-2.66-3.1.x86_64.rpm
 
  #yum localinstall dnsmasq-2.66-3.1.x86_64.rpm
 
  
Change the following in /etc/neutron/dhcp_agent.ini:

  dhcp_driver = mlnx_dhcp.MlnxDnsmasq

Start the DHCP server:

  #service neutron-dhcp-agent restart


= Usage Examples =

* In order to create an SR-IOV interface, refer to the [http://www.mellanox.com/openstack/pdf/mellanox-openstack-solution.pdf Mellanox OpenStack solution document], "Creating an SR-IOV Instance" chapter.
* In order to create a para-virtualized interface, refer to the [http://www.mellanox.com/openstack/pdf/mellanox-openstack-solution.pdf Mellanox OpenStack solution document], "Creating a Para-Virtualized vNIC Instance" chapter.
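
As a quick illustration only (the network name, flavor and image are placeholders, and the full flow is described in the solution document referenced above), an SR-IOV vNIC can be requested by creating a Neutron port with vnic_type direct and booting an instance with that port:

  #neutron port-create <network name> --name sriov_port1 --binding:vnic_type direct
  #nova boot --flavor m1.small --image <image> --nic port-id=<port id from the previous command> vm1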
 
  
 
= Known issues and Troubleshooting =
 
For known issues and troubleshooting options refer to Mellanox OpenStack Troubleshooting.

= References =

1. http://www.mellanox.com/openstack/

2. Source repository

3. Mellanox OFED

4. Mellanox OpenStack Solution Reference Architecture

5. Mellanox OpenStack Troubleshooting

For more details, please send your questions to openstack@mellanox.com.

Return to Mellanox-OpenStack wiki page.