Mellanox-Neutron-Juno-Redhat-Ethernet-SRIOV



Overview

Mellanox Neutron ML2 Driver

Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports the Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

Mellanox ML2 Mechanism Driver supports DIRECT (PCI passthrough) and MACVTAP (virtual interface with a tap-like software interface) vnic types. For details of the vnic type configuration API, please refer to the OpenStack configuration reference guide. Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
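
For example (a minimal sketch; the network name "net1", image, and flavor are illustrative and not part of this guide), a port requesting an SR-IOV passthrough vNIC can be created through the Neutron CLI and handed to Nova at boot time:

   #neutron port-create net1 --binding:vnic_type direct
   #nova boot --flavor m1.small --image <image> --nic port-id=<port uuid> vm1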

The driver supports the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.

• The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.

• The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

Mellanox Neutron Plugin

Please note that the Mellanox plug-in is deprecated as of the Icehouse release and is not supported in the Juno release. Its features are now part of the ML2 plug-in in the form of the Mellanox mechanism driver.

For details regarding Mellanox Neutron plugin, please refer to https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Redhat.

Mellanox Nova VIF Driver

The Mellanox Nova VIF driver should be used when running the Mellanox Mechanism Driver. The VIF driver supports VIF plugging by binding a vNIC of type DIRECT to the embedded switch port. The VIF driver for the MACVTAP type is included in the Nova libvirt generic VIF driver. For SR-IOV passthrough (vnic type DIRECT), use the VIF driver from the Mellanox git repository or RPM.

Prerequisites

Ethernet Network

Neutron Server Node

1. Make sure the ML2 plugin is the current Neutron plugin by checking the core_plugin option in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

2. Make sure /etc/neutron/plugin.ini is a symbolic link pointing at /etc/neutron/plugins/ml2/ml2_conf.ini.
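
If the symbolic link does not exist, it can be created as follows:

   #ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini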

3. Modify /etc/neutron/plugins/ml2/ml2_conf.ini and include the following:

   [ml2]
   type_drivers = vlan,flat
   tenant_network_types = vlan
   mechanism_drivers = openvswitch,mlnx
   [ml2_type_vlan]
   network_vlan_ranges = default:2:100
   [eswitch]
   vnic_type = hostdev
   apply_profile_patch = True
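
With the configuration above, tenant networks are allocated VLAN IDs 2-100 on the physical network "default". As an optional check (the network name "vlan-net" and segmentation ID 10 are illustrative), an administrator can also create a VLAN network explicitly:

   #neutron net-create vlan-net --provider:network_type vlan --provider:physical_network default --provider:segmentation_id 10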

4. Start (or restart) the Neutron server:

   #service neutron-server restart

Compute Node

To configure the Compute node:

1. Download the following Mellanox OpenStack repo file:

   #wget http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-icehouse.repo -O /etc/yum.repos.d/mlnx-icehouse.repo

2. Install the eSwitch Daemon (eSwitchd) RPM:

   #yum install eswitchd

3. Install Mellanox VIF driver:

   #yum install mlnxvif

4. Install the required RPM for the Neutron agent:

   #yum install openstack-neutron-mellanox

5. Configure the eSwitch fabrics parameter in /etc/eSwitchd/eSwitchd.conf:

   fabrics='<network name as in ml2>:<interface>'
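
For example, assuming the physical network is named "default" in ml2_conf.ini and eth2 is the Mellanox interface on this node (adjust both to your environment):

   fabrics='default:eth2'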

6. In /etc/nova/nova.conf, make sure the compute driver is libvirt and set the Mellanox VIF driver:

   [libvirt]
   vif_driver=mlnxvif.vif.MlxEthVIFDriver

7. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment:

   [AGENT]
   polling_interval - Polling interval (in seconds) for existing vNICs. The default is 2 seconds.
   rpc_support_old_agents - Must be set to 'True'.
   [ESWITCH]
   physical_interface_mapping - Maps each physical network name to the physical interface (on top of the Mellanox adapter) connecting the node to that physical network. The format of this parameter is <fabric name>:<PF name> (only relevant on the Compute node). The PF name can be either the PF (Physical Function) name, 'autoeth' for automatic Ethernet configuration, or 'autoib' for automatic InfiniBand configuration. The default is "default:autoeth".
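
For example, a minimal configuration matching the ml2_conf.ini above might look as follows (eth2 is an assumed PF name; replace it with your interface or use 'autoeth', and keep the option names as described above):

   [AGENT]
   polling_interval = 2
   rpc_support_old_agents = True
   [ESWITCH]
   physical_interface_mapping = default:eth2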

8. Restart Nova:

   #service openstack-nova-compute restart

9. Start eSwitch Daemon (eSwitchd):

   #service eswitchd start

10. Start the Neutron agent:

   #service neutron-mlnx-agent start

NOTE: The eSwitch Daemon must be running before the Neutron agent is started.
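
Optionally, verify from the Neutron server that the Mellanox agent is reported as alive:

   #neutron agent-list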

Network Node

To configure the Network node:

1. Change the configuration of the ini file located at /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. The "default" in the following example is the name of the physical network as configured in /etc/neutron/plugins/ml2/ml2_conf.ini:

   bridge_mappings = default:br-eth3,public:br-ex
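
If these bridges do not exist yet, they can be created with Open vSwitch (a sketch; eth3 is assumed to be the interface attached to the "default" physical network):

   #ovs-vsctl add-br br-eth3
   #ovs-vsctl add-port br-eth3 eth3
   #ovs-vsctl add-br br-ex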

2. Update /etc/neutron/dhcp_agent.ini on the DHCP server:

   interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

For additional information, please refer to the following link: http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html

3. Start the Open vSwitch agent and the DHCP agent:

   #service neutron-openvswitch-agent start
   #service neutron-dhcp-agent start

4. Configure the L3 agent configuration file (/etc/neutron/l3_agent.ini):

   #gateway_external_network_id = d4fdfebb-e027-4acd-bed4-1d96e896f336
   router_id = 41bf1aa0-3daf-4f51-9d23-0a4b15020c36
   interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
   external_network_bridge = br-ex

NOTE: The above is an example for configuring one router for tenants. Your values for gateway_external_network_id, router_id, and external_network_bridge may differ.
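
The IDs in the example above can be obtained after creating the external network and the tenant router, for example (names are illustrative):

   #neutron net-create public --router:external=True
   #neutron router-create tenant-router
   #neutron router-gateway-set tenant-router public
   #neutron net-list
   #neutron router-list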

5. Start the L3 agent:

   #service neutron-l3-agent restart

Known issues and Troubleshooting

For known issues and troubleshooting options, refer to Mellanox OpenStack Troubleshooting.

References

1. http://www.mellanox.com/openstack/

2. Source repository

3. Mellanox OFED

4. Mellanox OpenStack Solution Reference Architecture

5. Mellanox OpenStack Troubleshooting

For more details, please send your questions to openstack@mellanox.com.

Return to Mellanox-OpenStack wiki page.