=Ethernet Configuration with Mellanox=

=SR-IOV Networking in OpenStack Juno=

OpenStack Juno added inbox support to request VM access to a virtual network via an SR-IOV NIC. With the introduction of SR-IOV based NICs, the traditional virtual bridge is no longer required. Each SR-IOV port is associated with a virtual function (VF). SR-IOV ports may be provided by hardware-based Virtual Ethernet Bridging (HW VEB), or they may be extended to an upstream physical switch (IEEE 802.1br).

There are two ways that an SR-IOV port may be connected:

* directly connected to its VF
* connected with a macvtap device that resides on the host, which is then connected to the corresponding VF
  
 
==Configuration==

[[SR-IOV-Passthrough-For-Networking|Configure SR-IOV]]

= Overview =

== Mellanox Neutron ML2 Driver ==
 
Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.
 
 
 
This driver supports the Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.
 
 
 
Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) and MACVTAP (virtual interface with a tap-like software interface) vnic types. For vnic type configuration API details, please refer to the configuration reference guide ([http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html here]). Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
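
For illustration, a DIRECT vNIC can be requested through the port binding extension when creating a port (a sketch using the Juno-era neutron CLI; the network name is a placeholder):

    #neutron port-create <network> --binding:vnic_type direct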
 
 
 
The driver supports the VLAN network type to provide virtual networks on either Ethernet or InfiniBand fabrics.
 
 
* The Mellanox OpenStack Neutron agent (L2 agent) runs on each compute node.
 
 
 
* The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
 
 
 
== Mellanox Neutron Plugin ==
 
Please note: the Mellanox plug-in was deprecated in the Icehouse release and is not supported in the Juno release.

The features of the plug-in are now part of the ML2 plug-in in the form of the Mellanox mechanism driver.
 
 
 
For details regarding the Mellanox Neutron plugin, please refer to https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Redhat.
 
 
 
==  Mellanox Nova VIF Driver ==
 
The Mellanox Nova VIF driver should be used when running the Mellanox mechanism driver. The VIF driver supports VIF plugging by binding a vNIC of type DIRECT to the embedded switch port.
 
The VIF driver for the MACVTAP type is included in the generic Nova libvirt VIF driver. For SR-IOV pass-through (vnic type DIRECT), use the VIF driver from the Mellanox git repository or RPM.
 
 
 
== Prerequisites ==
 
* A running OpenStack environment  installed with the ML2 plugin on top of OVS.
 
* All nodes equipped with Mellanox ConnectX®-3 Network Adapter (http://www.mellanox.com/page/products_dyn?product_family=119)
 
* Mellanox OFED 2.2 or greater installed on all nodes. Please refer to Mellanox website for the latest OFED: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
 
* SR-IOV enabled on all compute nodes. For more information, please refer to the [http://community.mellanox.com/docs/DOC-1317 Mellanox Community post]; a quick check is shown after this list.
 
* The iproute2 software package (http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2) installed on all compute nodes.
 
* VLANs configured on the ports in the switch.
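
A quick way to confirm that SR-IOV virtual functions are exposed on a compute node (a minimal check; 15b3 is the Mellanox PCI vendor ID, and the VF count depends on the num_vfs setting of the mlx4_core module):

    #lspci -d 15b3: | grep -i "virtual function"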
 
 
 
= Ethernet Network =
 
== Neutron Server Node ==
 
 
 
1. Make sure the ML2 plugin is the current Neutron plugin by checking the core_plugin option in /etc/neutron/neutron.conf:
 
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
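
One way to confirm the active setting (assuming the stock file location):

    #grep ^core_plugin /etc/neutron/neutron.conf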
 
 
 
2. Make sure /etc/neutron/plugin.ini is a symbolic link pointing at /etc/neutron/plugins/ml2/ml2_conf.ini.
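
If the link is missing, it can be created as follows (a sketch assuming the standard RDO layout):

    #ln -sf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini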
 
 
 
3. Modify /etc/neutron/plugins/ml2/ml2_conf.ini and include the following:
 
 
 
    [ml2]
    type_drivers = vlan,flat
    tenant_network_types = vlan
    mechanism_drivers = openvswitch,mlnx
    [ml2_type_vlan]
    network_vlan_ranges = default:2:100
    [eswitch]
    vnic_type = hostdev
    apply_profile_patch = True
 
 
 
4. Start (or restart) the Neutron server:
 
    #service neutron-server restart
 
 
 
==  Compute Node ==
 
To configure the Compute node:
 
 
 
1. Download the following Mellanox OpenStack repo file:
 
    #wget http://www.mellanox.com/downloads/solutions/openstack/icehouse/repo/mlnx-icehouse/mlnx-icehouse.repo -O /etc/yum.repos.d/mlnx-icehouse.repo
 
2. Install the eSwitch Daemon (eSwitchd) RPM:
 
  #yum install eswitchd
 
3. Install Mellanox VIF driver:
 
    #yum install mlnxvif
 
4. Install the required RPM for the Neutron agent:
 
    #yum install openstack-neutron-mellanox
 
5. Configure the eSwitch fabrics parameter in  /etc/eSwitchd/eSwitchd.conf:
 
    fabrics='<network name as in ml2>:<interface>'
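
For example, with the ml2_conf.ini above ("default" as the physical network name) and assuming eth2 is the Mellanox interface on this node (a hypothetical name):

    fabrics='default:eth2'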
 
6. In /etc/nova/nova.conf, verify that the compute driver is libvirt and set the Mellanox VIF driver:
 
    [libvirt]
 
    vif_driver=mlnxvif.vif.MlxEthVIFDriver
 
7. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment:
 
    [AGENT]

    polling_interval - The polling interval (in seconds) for existing vNICs. The default is 2 seconds.

    rpc_support_old_agents - Must be set to 'True'.

    [ESWITCH]

    physical_interface_mapping - Maps each physical network name to the physical interface (on top of the Mellanox adapter) connecting the node to that physical network (only relevant on the compute node). The format of this parameter is <fabric name>:<PF name>. The PF name can either be the PF (Physical Function) name, 'autoeth' for automatic Ethernet configuration, or 'autoib' for automatic InfiniBand configuration. The default is "default:autoeth".
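
A minimal example for an Ethernet fabric, assuming eth2 is the Mellanox PF on this node (a hypothetical interface name; 'autoeth' may be used instead for automatic Ethernet configuration):

    [AGENT]
    polling_interval = 2
    rpc_support_old_agents = True
    [ESWITCH]
    physical_interface_mapping = default:eth2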
 
8. Restart Nova:

    #service openstack-nova-compute restart
 
9. Start eSwitch Daemon (eSwitchd):
 
    #service eswitchd start
 
10. Start the Neutron agent:
 
    #service neutron-mlnx-agent start
 
 
 
NOTE: The eSwitch daemon must be running before the Neutron agent is started.
 
 
 
== Network Node ==
 
 
 
To configure the Network node:
 
 
 
1. Update the configuration file located at /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

The "default" in the following example is the name of the physical network as configured in /etc/neutron/plugins/ml2/ml2_conf.ini:
 
    bridge_mappings = default:br-eth3,public:br-ex
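
The mapped bridges must exist and be attached to their physical interfaces. A sketch, assuming eth3 is the interface on the "default" physical network and br-ex already carries the external uplink:

    #ovs-vsctl add-br br-eth3
    #ovs-vsctl add-port br-eth3 eth3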
 
2. Update /etc/neutron/dhcp_agent.ini on the node running the DHCP agent:
 
    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
 
For additional information, please refer to the following link:
 
http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html
 
 
 
3. Start the Open vSwitch agent and the DHCP agent:
 
    #service neutron-openvswitch-agent start
 
    #service neutron-dhcp-agent start
 
 
 
4. Configure the L3 agent configuration file (/etc/neutron/l3_agent.ini):
 
    #gateway_external_network_id = d4fdfebb-e027-4acd-bed4-1d96e896f336
 
      router_id = 41bf1aa0-3daf-4f51-9d23-0a4b15020c36
 
      interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 
      external_network_bridge = br-ex
 
 
 
NOTE: The above is an example for configuring one router for tenants. Your values for gateway_external_network_id, router_id, and external_network_bridge may differ.
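
If the external network and router do not exist yet, they can be created and their IDs retrieved along these lines (the names "public" and "router1" are illustrative):

    #neutron net-create public --router:external=True
    #neutron router-create router1
    #neutron net-show public
    #neutron router-show router1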
 
 
 
5. Start the L3 agent:
 
    #service neutron-l3-agent restart
 
 
 
= Known issues and Troubleshooting =
 
 
 
For known issues and troubleshooting options refer to  [http://community.mellanox.com/docs/DOC-1127 Mellanox OpenStack Troubleshooting].
 
 
 
= References =
 
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]
 
 
 
2. [https://github.com/mellanox-openstack Source repository]
 
 
 
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]
 
 
 
4. [http://www.mellanox.com/openstack/pdf/mellanox-openstack-solution.pdf Mellanox OpenStack Solution Reference Architecture]
 
 
 
5. [http://community.mellanox.com/docs/DOC-1127 Mellanox OpenStack Troubleshooting]
 
 
 
For more details, please send your questions to [mailto:openstack@mellanox.com openstack@mellanox.com].
 
 
 
Return to [https://wiki.openstack.org/wiki/Mellanox-OpenStack  Mellanox-OpenStack] wiki page.
 
