Mellanox-Neutron-Kilo-InfiniBand

Latest revision as of 15:34, 31 October 2016

Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports Mellanox embedded switch functionality as part of the InfiniBand HCA. Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the [https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking#VM_creation_flow_with_SR-IOV_vNIC configuration reference guide]. Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as Remote Direct Memory Access (RDMA).
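The VM creation flow with a DIRECT vNIC can be sketched as follows. This is a minimal, illustrative example only: the network name net1, the port name, and the flavor are assumptions, and the placeholders must be replaced with real IDs from your environment.

```shell
# Create a port with the DIRECT (PCI passthrough) vNIC type on an existing network
neutron port-create net1 --binding:vnic_type direct --name direct-port1

# Boot a VM attached to that port (substitute the image ID and the port ID
# returned by the previous command)
nova boot --flavor m1.small --image <image-id> --nic port-id=<port-id> vm1
```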

The driver supports the VLAN network type to facilitate virtual networks on InfiniBand fabrics.
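A minimal ML2 server-side configuration enabling VLAN tenant networks with the Mellanox mechanism driver might look like the sketch below. The physical network name default, the VLAN range, and the exact driver list are assumptions for illustration; consult your distribution's ml2_conf.ini for the authoritative option names.

```ini
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers = mlnx,openvswitch

[ml2_type_vlan]
# On an InfiniBand fabric, VLAN IDs are mapped to partition keys (PKeys)
network_vlan_ranges = default:1:100
```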

  • The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
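On each compute node, the agent derives that mapping from a configured association between the Neutron physical network and the node's InfiniBand interface. A sketch of the relevant agent configuration, in which the section name, the physical network name default, and the interface name ib0 are assumptions for illustration:

```ini
[eswitch]
# Map the Neutron physical network to this node's InfiniBand interface
physical_interface_mappings = default:ib0
```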


Prerequisites

  • All nodes must be equipped with a Mellanox ConnectX®-3 / ConnectX®-3 Pro Network Adapter.
  • Mellanox OFED 2.4 or above must be installed on all nodes. Please refer to the Mellanox website for the latest OFED version.
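To verify which OFED version is installed on a node, Mellanox OFED ships the ofed_info utility; a quick check might look like:

```shell
# Print the installed Mellanox OFED version string
ofed_info -s
```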


For Kilo release information, please refer to the relevant OS as follows: