Mellanox-Neutron-ML2-Train

Mellanox provides two ML2 mechanism drivers that implement the ML2 plugin mechanism driver API. These drivers provide functional parity with the Mellanox Neutron plugin and allow Neutron to provide networking services over an InfiniBand fabric as well as switch configuration for Ethernet fabrics.
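For orientation, the sketch below shows how the two drivers are typically enabled in Neutron's ML2 configuration. The file path is the usual default and the driver aliases match the names used on this page, but treat both as assumptions and verify them against your networking-mlnx installation.

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative sketch only
    [ml2]
    # enable one or both Mellanox drivers alongside any other mechanism drivers in use
    mechanism_drivers = mlnx_infiniband,mlnx_sdn_assist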

mlnx_infiniband

This driver supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA. The driver supports the DIRECT (PCI passthrough) vNIC type; for vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs (primarily SR-IOV virtual functions) mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access). In addition, mlnx_infiniband implements an L2 agent that runs on each compute node to apply Neutron port configurations. The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
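As an illustration of the DIRECT vNIC type, a Neutron port can be created with the matching --vnic-type and attached to an instance. The network, flavor and image names below are placeholders:

    # create a port that requests an SR-IOV (PCI passthrough) vNIC on an existing network
    openstack port create --network <tenant-network> --vnic-type direct sriov-port-1
    # boot an instance using that port
    openstack server create --flavor <flavor> --image <image> --nic port-id=<port-uuid> vm-1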

mlnx_sdn_assist

The SDN mechanism driver allows Mellanox NEO to dynamically configure VLANs on Mellanox switches; a minimal connection-settings sketch follows the list below.

  • The driver's main role on an InfiniBand fabric is to perform PKey configuration for Neutron DIRECT, DHCP and L3 ports.
  • The driver's main role on an Ethernet fabric is to perform switch configuration (e.g. VLAN configuration).
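The sketch below illustrates the kind of NEO connection settings the driver needs. The section and option names are assumptions based on common networking-mlnx deployments, not taken from this page; consult the networking-mlnx documentation for the exact names.

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- hypothetical example values
    [sdn]
    url = http://<neo-host>/neo
    username = <neo-user>
    password = <neo-password>
    domain = <neo-domain>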

Further reading:


Both drivers support the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.
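Since both drivers rely on the VLAN network type, the corresponding ML2 settings would typically look like the sketch below; the physical network name and VLAN range are placeholders:

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative sketch only
    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan

    [ml2_type_vlan]
    # the physical network label must match the mapping configured on the compute nodes
    network_vlan_ranges = <physnet>:<vlan-min>:<vlan-max>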

Prerequisites

  • Mellanox ConnectX® Family device:
    ConnectX®-3/ConnectX®-3 Pro
    ConnectX®-4/ConnectX®-4 Lx
    ConnectX®-5
    [https://www.mellanox.com/page/products_dyn?product_family=265&mtag=connectx_6_vpi_card ConnectX®-6]
  • Driver: [http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers Mellanox OFED] 4.6-1.0.1.1 or greater (a quick version check is sketched below)
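A quick way to check these prerequisites on a host is sketched below. ofed_info ships with MLNX_OFED; the lspci filter is just one convenient way to spot ConnectX devices:

    # report the installed MLNX_OFED version (expected 4.6-1.0.1.1 or newer)
    ofed_info -s
    # list Mellanox ConnectX devices present on the host
    lspci | grep -i mellanox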


For Train release information, refer to the relevant fabric type as follows:

  • Infiniband
  • Ethernet (SR-IOV)