
Mellanox-Neutron-ML2-Train

Mellanox provides two ML2 mechanism drivers that implement the ML2 plugin mechanism driver API. These drivers provide functional parity with the Mellanox Neutron plugin and allow Neutron to provide networking services over an InfiniBand fabric, as well as switch configuration for an Ethernet fabric.

mlnx_infiniband

This driver supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA. The driver supports the DIRECT (PCI passthrough) vNIC type; for vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs (primarily SR-IOV virtual functions) mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access). In addition, mlnx_infiniband implements an L2 agent that runs on each compute node to apply Neutron port configurations. The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
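
As an illustration, a DIRECT vNIC is typically requested by creating a Neutron port with the direct vNIC type and attaching it to an instance. A minimal sketch using the OpenStack CLI (the network, flavor, image, port and server names here are placeholders):

  # Create a port requesting the DIRECT (SR-IOV) vNIC type
  openstack port create --network private_net --vnic-type direct sriov_port1

  # Boot a VM attached to that port; the L2 agent then applies the
  # VIF connectivity on the embedded switch port
  openstack server create --flavor m1.small --image centos7 \
      --nic port-id=sriov_port1 vm1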

mlnx_sdn_assist

This SDN mechanism driver utilizes Mellanox NEO to:

  • Perform PKEY configuration for Neutron DIRECT, DHCP and L3 ports in an InfiniBand fabric.
  • Perform switch configuration (e.g. VLAN configuration) in an Ethernet fabric; the NEO connection settings are sketched below.
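
The driver learns how to reach NEO from its Neutron configuration. A minimal sketch of an [sdn] section, assuming the option names used by the networking-mlnx package and placeholder address and credentials:

  [sdn]
  # URL and credentials of the Mellanox NEO instance (placeholders)
  url = http://10.0.0.1/neo
  domain = cloudx
  username = admin
  password = 123456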


Both drivers support the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.
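
For example, enabling one of the drivers with VLAN tenant networks in ml2_conf.ini could look like the following sketch (the physical network name and VLAN range are placeholders; list mlnx_sdn_assist instead of, or alongside, mlnx_infiniband as needed):

  [ml2]
  type_drivers = vlan
  tenant_network_types = vlan
  mechanism_drivers = mlnx_infiniband

  [ml2_type_vlan]
  # <physnet>:<min_vlan>:<max_vlan>
  network_vlan_ranges = default:2:100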

Prerequisites

  • Mellanox ConnectX® family device:
    ConnectX®-3 / ConnectX®-3 Pro
    ConnectX®-4 / ConnectX®-4 Lx
    ConnectX®-5
    ConnectX®-6
  • Network nodes minimum kernel:
    Distribution   Minimum kernel
    CentOS 7.x     3.10.0-1062
    Ubuntu 18.x    5.0.0-1020

    Alternatively, install Mellanox OFED 4.6-1.0.1.1 or greater instead of relying on the minimum kernel.

  • Compute nodes: Mellanox OFED 4.6-1.0.1.1 or greater installed.
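
To confirm which MLNX_OFED release is installed on a node, the ofed_info utility shipped with the driver prints the version string:

  # Prints e.g. MLNX_OFED_LINUX-4.6-1.0.1.1
  ofed_info -s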

Configurations