Mellanox-Neutron-ML2-Train

Mellanox provides two ML2 mechanism drivers that implement the ML2 plugin's Mechanism Driver API. These drivers provide functional parity with the Mellanox Neutron plugin and allow Neutron to provide networking services over an InfiniBand fabric and to perform switch configuration for an Ethernet fabric.
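
For orientation, the sketch below shows how the two drivers would typically be enabled in Neutron's ML2 configuration. It is illustrative only: the driver names assume the networking-mlnx package registers them under entry points matching the section names below; refer to the configuration reference guide for authoritative settings.

<pre>
# /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative sketch only.
# Assumes networking-mlnx registers the drivers under these names.
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx_infiniband,mlnx_sdn_assist
</pre>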
=== mlnx_infiniband ===
 
This driver supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA.
The driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide.
Hardware vNICs (primarily SR-IOV virtual functions) mapped to the guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
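
For example, a hardware vNIC can be requested by creating a Neutron port with the DIRECT vNIC type and booting a VM with it; the network, port, image, and flavor names below are hypothetical.

<pre>
# Illustrative only: request an SR-IOV virtual function by asking for
# the "direct" vNIC type, then boot a VM with that port.
openstack port create --network ib-net --vnic-type direct ib-port
openstack server create --image cirros --flavor m1.small --port ib-port vm1
</pre>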
In addition, mlnx_infiniband implements an L2 agent which runs on each compute node to apply Neutron port configurations.
The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an Embedded Switch port.
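
A minimal sketch of the agent's fabric mapping follows; the file path, section name, and option name are assumptions about the networking-mlnx agent, and ib0 is a placeholder interface.

<pre>
# Hedged sketch of the L2 agent configuration; the path, section, and
# option names are assumptions, and ib0 is a placeholder interface.
# /etc/neutron/plugins/mlnx/mlnx_conf.ini
[eswitch]
physical_interface_mappings = physnet1:ib0
</pre>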

=== mlnx_sdn_assist ===
The [https://community.mellanox.com/docs/DOC-2155 SDN Mechanism Driver] utilizes Mellanox NEO to:
* Perform PKEY configuration for Neutron DIRECT, DHCP and L3 ports in an InfiniBand fabric.
* Perform switch configuration (e.g. VLAN configuration) in an Ethernet fabric.
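
Connecting the driver to a NEO instance generally requires endpoint and credential settings along the lines of the hedged sketch below; the [sdn] option names are assumptions about the networking-mlnx package, and the endpoint and credentials are placeholders.

<pre>
# /etc/neutron/plugins/ml2/ml2_conf.ini -- hedged sketch; option names
# are assumptions, NEO endpoint and credentials are placeholders.
[sdn]
url = http://neo-host:8080/neo
domain = cloudx
username = admin
password = secret
</pre>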

'''Further reading:'''
* [http://www.mellanox.com/page/products_dyn?product_family=220&mtag=mellanox_neo NEO]
* [http://www.mellanox.com/page/ethernet_switch_overview Mellanox Switches]
* [https://community.mellanox.com/docs/DOC-2251 SDN Mechanism Driver]

Both drivers support the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.
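
For example, a VLAN provider network that either driver can serve might be created as follows; the physical network name and segmentation ID are placeholders.

<pre>
# Illustrative only: physnet1 and the segment ID stand in for
# fabric-specific values.
openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 10 vlan-net
</pre>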

=== Prerequisites ===
 
* Mellanox ConnectX® Family device:
     [https://www.mellanox.com/page/products_dyn?product_family=119 ConnectX®-3/ConnectX®-3 PRO]
     [https://www.mellanox.com/page/products_dyn?product_family=201& ConnectX®-4]/[https://www.mellanox.com/page/products_dyn?product_family=214&mtag=connectx_4_lx_en_ic ConnectX®-4Lx]
     [https://www.mellanox.com/page/products_dyn?product_family=258&mtag=connectx_5_vpi_card ConnectX®-5]
     [https://www.mellanox.com/page/products_dyn?product_family=265&mtag=connectx_6_vpi_card ConnectX®-6]
* Network nodes minimum kernel:
{| class="wikitable"
|-
! Distribution !! Kernel
|-
| CentOS 7.x || 3.10.0-1062
|-
| Ubuntu 18.x || 5.0.0-1020
|}
  
Alternatively, install the driver: [http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers Mellanox OFED] 4.6-1.0.1.1 or greater.
* Compute nodes: [http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers Mellanox OFED] 4.6-1.0.1.1 or greater installed on each compute node.
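
Kernel and driver versions can be verified on each node as shown below; the version strings in the output are illustrative.

<pre>
# Check the running kernel against the table above, then verify the
# installed Mellanox OFED release (output lines are illustrative).
$ uname -r
3.10.0-1062.el7.x86_64
$ ofed_info -s
MLNX_OFED_LINUX-4.6-1.0.1.1:
</pre>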
  
=== Configurations ===
* [[Mellanox-Neutron-Train-InfiniBand | InfiniBand (SR-IOV)]]
* [[Mellanox-Neutron-Train-Ethernet | Ethernet (SR-IOV)]]