Mellanox-Neutron-ML2-Kilo
  

The Mellanox ML2 Mechanism Driver implements the ML2 plugin's mechanism driver API.

This page is deprecated. Please refer to Mellanox-Neutron-Kilo-InfiniBand.


This driver supports the Mellanox embedded switch functionality that is part of the InfiniBand HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin.

The Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) vNIC type. For vNIC type configuration API details, please refer to the configuration reference guide. Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as RDMA (Remote Direct Memory Access).
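As a minimal illustration of that API, the sketch below uses python-neutronclient to create a port that requests the DIRECT vNIC type through the binding:vnic_type attribute. The credentials, auth URL, and network UUID are placeholders, not values from this guide.

  from neutronclient.v2_0 import client

  # Placeholder credentials and Keystone endpoint -- substitute your own.
  neutron = client.Client(username='admin',
                          password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  # Ask for a hardware (PCI passthrough) vNIC by setting the DIRECT
  # vnic type when the port is created.
  port = neutron.create_port({'port': {
      'network_id': 'NETWORK_UUID',   # placeholder: an existing VLAN network
      'binding:vnic_type': 'direct',
  }})
  print(port['port']['id'])

The resulting port ID can then be handed to Nova when booting the instance (for example, nova boot --nic port-id=<port-id> ...), which is what maps the hardware vNIC into the guest.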

The driver supports the VLAN network type to provide virtual networks on InfiniBand fabrics.
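For illustration, a provider VLAN network can be created through the same API. This is only a sketch: the physical network name ('default') and VLAN ID (100) are assumptions and must match the VLAN ranges configured for ML2 in the deployment.

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  # Create a VLAN provider network; 'default' and VLAN 100 are placeholders
  # that must fall within the ML2 network_vlan_ranges of the deployment.
  net = neutron.create_network({'network': {
      'name': 'ib-vlan-net',
      'provider:network_type': 'vlan',
      'provider:physical_network': 'default',
      'provider:segmentation_id': 100,
  }})

  # Attach an IPv4 subnet so ports on this network receive addresses.
  neutron.create_subnet({'subnet': {
      'network_id': net['network']['id'],
      'ip_version': 4,
      'cidr': '10.0.0.0/24',
  }})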

  • The Mellanox OpenStack Neutron Agent (L2 agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port; a quick way to check that the agent is registered is sketched below.
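Agent registration can be checked over the Neutron API. The filter on 'mlnx' in the agent binary name below is an assumption about how the Mellanox L2 agent identifies itself; adjust it to match the installed packaging.

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  # List all registered Neutron agents and report those that look like the
  # Mellanox L2 agent ('mlnx' in the binary name is an assumption).
  for agent in neutron.list_agents()['agents']:
      if 'mlnx' in agent['binary']:
          state = 'alive' if agent['alive'] else 'DOWN'
          print(agent['host'], agent['binary'], state)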


Prerequisites

  • All nodes are equipped with a Mellanox ConnectX®-3 or ConnectX®-3 Pro Network Adapter (http://www.mellanox.com/page/products_dyn?product_family=119).
  • Mellanox OFED 2.4 or later is installed on all nodes. Please refer to the Mellanox website for the latest OFED release (http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers). A simple verification sketch follows this list.
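The sketch below is one way to sanity-check both prerequisites on a node. It assumes the ofed_info tool shipped with Mellanox OFED is on the PATH; it is a quick check, not a replacement for the installation guides.

  import subprocess

  # Report the installed Mellanox OFED version (ofed_info ships with OFED).
  try:
      ofed = subprocess.check_output(['ofed_info', '-s']).decode().strip()
      print('OFED version string:', ofed)
  except OSError:
      print('ofed_info not found -- Mellanox OFED does not appear to be installed')

  # List Mellanox PCI devices to confirm a ConnectX-3 / ConnectX-3 Pro adapter is present.
  pci = subprocess.check_output(['lspci']).decode()
  mellanox = [line for line in pci.splitlines() if 'Mellanox' in line]
  print('\n'.join(mellanox) if mellanox else 'No Mellanox adapter found')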


For Kilo release information, refer to the page for the relevant operating system:

InfiniBand

  • Redhat7: Mellanox-Neutron-Kilo-Redhat-InfiniBand
  • Ubuntu14.04: Mellanox-Neutron-Kilo-Ubuntu-InfiniBand

Ethernet (SR-IOV)

  • Redhat7/Ubuntu14.04: Mellanox-Neutron-Kilo-Redhat-Ethernet