Mellanox-Neutron-ML2-Train

Mellanox provides two ML2 Mechanism Drivers that implement the ML2 Plugin Mechanism Driver API. These drivers provide functional parity with the Mellanox Neutron plugin and allow Neutron to provide networking services over an InfiniBand fabric, as well as switch configuration for an Ethernet fabric.

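As a point of reference, the fragment below is a minimal sketch of how either driver would typically be enabled in the ML2 plugin configuration (ml2_conf.ini). The driver names follow the headings on this page and the physical network name and VLAN range are only examples; verify the exact entry-point names and values against your networking-mlnx release.

<pre>
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch only; values are examples)
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
# Enable the Mellanox driver relevant to your fabric (names assumed from this page)
mechanism_drivers = mlnx_infiniband,openvswitch
# For NEO-managed switch configuration use instead:
# mechanism_drivers = mlnx_sdn_assist,openvswitch

[ml2_type_vlan]
# Physical network name and VLAN range are deployment-specific
network_vlan_ranges = physnet1:2:100
</pre>
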
=== mlnx_infiniband ===
 
This driver supports the Mellanox embedded switch functionality provided as part of the VPI (Ethernet/InfiniBand) HCA. The driver supports the DIRECT (PCI passthrough) vNIC type; for vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs (primarily SR-IOV virtual functions) mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access). mlnx_infiniband implements an L2 agent which runs on each compute node to apply Neutron port configuration; the agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.

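For illustration, the commands below show one common way to request a DIRECT (SR-IOV) port and attach it to an instance. The network, image, and flavor names are placeholders rather than values taken from this guide.

<pre>
# Create a port requesting the DIRECT vNIC type on an existing VLAN network
openstack port create --network private --vnic-type direct sriov_port1

# Boot an instance that uses the SR-IOV port
openstack server create --flavor m1.small --image cirros \
    --port sriov_port1 sriov_vm1
</pre>
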
=== mlnx_sdn_assist ===
The [https://community.mellanox.com/docs/DOC-2155 SDN Mechanism Driver] allows NEO to dynamically configure VLANs on Mellanox switches (see the configuration sketch after the list below).
* The driver's main role for an InfiniBand fabric is to perform PKey configuration for Neutron DIRECT, DHCP, and L3 ports.
* The driver's main role for an Ethernet fabric is to perform switch configuration (e.g. VLAN configuration).
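For mlnx_sdn_assist, the driver needs connection details for the NEO controller. The fragment below is a sketch based on the [sdn] option group documented for networking-mlnx; the option names and values shown here are assumptions and should be checked against your release.

<pre>
# ml2_conf.ini, or a dedicated config file passed to neutron-server (sketch only)
[sdn]
# URL, domain and credentials of the NEO instance managing the switches
url = http://192.0.2.10/neo
domain = cloudx
username = admin
password = admin_password
</pre>
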
  
Further reading:
* [http://www.mellanox.com/page/products_dyn?product_family=220&mtag=mellanox_neo NEO]
* [http://www.mellanox.com/page/ethernet_switch_overview Mellanox Switches]
* [https://community.mellanox.com/docs/DOC-2251 SDN Mechanism Driver]
  
Both drivers support the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.
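
As a quick usage sketch, a VLAN network that either driver can serve might be created as follows; the physical network name, segmentation ID, and subnet range are examples only.

<pre>
# Create a VLAN provider network and a subnet on it (example values)
openstack network create --provider-network-type vlan \
    --provider-physical-network physnet1 --provider-segment 10 vlan10_net
openstack subnet create --network vlan10_net \
    --subnet-range 192.168.10.0/24 vlan10_subnet
</pre>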
  
 
'''Prerequisites'''
 


* Mellanox ConnectX® Family device:
** ConnectX®-3 / ConnectX®-3 PRO
** ConnectX®-4 / ConnectX®-4 Lx
** ConnectX®-5
** ConnectX®-6


For Train release information, refer to the relevant OS as follows:

* Infiniband
* Ethernet (SR-IOV)