
Mellanox provides two ML2 mechanism drivers that implement the ML2 plugin mechanism driver API. These drivers provide functional parity with the Mellanox Neutron plugin and allow Neutron to provide networking services over an InfiniBand fabric, as well as switch configuration for an Ethernet fabric.
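Both drivers are enabled through Neutron's standard ML2 mechanism_drivers option. As a minimal sketch, assuming the driver aliases registered by the installed Mellanox networking package match the names used on this page, ml2_conf.ini might contain:

  [ml2]
  type_drivers = vlan,flat
  tenant_network_types = vlan
  # Enable one (or both) Mellanox mechanism drivers alongside any
  # other drivers already in use (e.g. openvswitch):
  mechanism_drivers = mlnx_infiniband,mlnx_sdn_assist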

mlnx_infiniband

This driver supports the Mellanox embedded switch functionality that is part of VPI (Ethernet/InfiniBand) HCAs. It supports the DIRECT (PCI passthrough) vNIC type; for vNIC type configuration API details, refer to the configuration reference guide. Hardware vNICs (primarily SR-IOV virtual functions) mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access). In addition, mlnx_infiniband implements an L2 agent that runs on each compute node and applies Neutron port configurations. The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
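As an illustration of the DIRECT vNIC type, a port is typically pre-created with vnic_type direct and attached to an instance; the network, image, and flavor names below are hypothetical placeholders:

  # Create a Neutron port bound with the DIRECT (SR-IOV) vNIC type
  openstack port create --network ib_net --vnic-type direct direct_port1
  # Boot an instance attached to that port (use the port's UUID)
  openstack server create --flavor m1.small --image cirros \
      --nic port-id=<port-uuid> vm1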

mlnx_sdn_assist

This SDN mechanism driver utilizes Mellanox NEO to (see the configuration sketch after this list):

  • Perform PKey configuration for Neutron DIRECT, DHCP, and L3 ports in an InfiniBand fabric.
  • Perform switch configuration (e.g., VLAN configuration) in an Ethernet fabric.
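The driver reaches NEO through a REST endpoint supplied in the Neutron configuration. The sketch below is hedged: the [sdn] section and option names are assumed from the Mellanox networking package's sample configuration, and the address and credentials are placeholders to be replaced with real values:

  [sdn]
  # REST endpoint of the Mellanox NEO server (placeholder address)
  url = http://<neo-server-ip>/neo
  # NEO login credentials (placeholders)
  username = admin
  password = <password>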


Both drivers support the VLAN network type, facilitating virtual networks on either Ethernet or InfiniBand fabrics.
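For example, with a VLAN range mapped to a physical network in ml2_conf.ini (physnet1 and the range below are illustrative):

  [ml2_type_vlan]
  # physical network name : first VLAN ID : last VLAN ID
  network_vlan_ranges = physnet1:100:200

a provider VLAN network on that fabric can then be created as usual:

  openstack network create --provider-network-type vlan \
      --provider-physical-network physnet1 --provider-segment 100 vlan_net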

Prerequisites

  • Mellanox ConnectX® family device:
    ConnectX®-3 / ConnectX®-3 Pro
    ConnectX®-4 / ConnectX®-4 Lx
    ConnectX®-5
    ConnectX®-6


Configurations


Ethernet (SR-IOV)