Mellanox-Neutron-ML2-Juno


This page is still under construction



Mellanox ML2 Mechanism Driver implements the ML2 Plugin Mechanism Driver API.

This driver supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. The Mellanox ML2 Mechanism Driver provides functional parity with the Mellanox Neutron plugin: the sriovnicswitch mechanism driver is used for Ethernet, while the mlnx mechanism driver is used for InfiniBand.
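As a minimal sketch, the mechanism drivers are enabled in the ML2 plugin configuration; the file path and the exact driver list below are assumptions that depend on your distribution and fabric:

 # /etc/neutron/plugins/ml2/ml2_conf.ini (path may vary by distribution)
 [ml2]
 type_drivers = vlan,flat
 tenant_network_types = vlan
 # sriovnicswitch for Ethernet, mlnx for InfiniBand
 mechanism_drivers = mlnx,sriovnicswitch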

Mellanox ML2 Mechanism Driver supports the DIRECT (PCI passthrough) and MACVTAP (virtual interface with a tap-like software interface) vnic types. For vnic type configuration API details, please refer to the configuration reference guide (click [http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html here]). Hardware vNICs mapped to guest VMs allow higher performance and advanced features such as RDMA (remote direct memory access).
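As an illustrative sketch, the vnic type can be requested per port through the port binding extension; the network name, flavor, and IDs below are placeholders:

 # request a passthrough (DIRECT) vNIC on a hypothetical network "net1"
 neutron port-create net1 --binding:vnic_type direct
 # or a macvtap vNIC
 neutron port-create net1 --binding:vnic_type macvtap
 # boot a VM with the pre-created port (image and port IDs are placeholders)
 nova boot --flavor m1.small --image <image-id> --nic port-id=<port-id> vm1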

The driver supports the VLAN network type to facilitate virtual networks on either Ethernet or InfiniBand fabrics.
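A minimal sketch of the corresponding VLAN type driver settings; the physical network name and VLAN range are assumptions:

 [ml2_type_vlan]
 # map a physical fabric name to an allowed VLAN range (values are examples)
 network_vlan_ranges = physnet1:1000:1999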

  • The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
  • The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port; see the configuration sketch after this list.
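A minimal sketch of such a mapping in the agent configuration; the file path, section name, and interface name are assumptions based on typical Mellanox agent setups:

 # /etc/neutron/plugins/mlnx/mlnx_conf.ini (path is an assumption)
 [eswitch]
 # map the physical network from the VLAN ranges above to a Mellanox NIC port
 physical_interface_mappings = physnet1:eth1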


For Juno release information, refer to the relevant OS as follows: