For Ubuntu OpenStack click [https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Ubuntu here]

For Red Hat OpenStack click [https://wiki.openstack.org/wiki/Mellanox-Neutron-Havana-Redhat here]

= Overview =
 
== Mellanox Neutron Plugin ==
 
The OpenStack Mellanox Neutron plugin supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA.
 
The Mellanox Neutron plugin provides a hardware vNIC (based on an SR-IOV virtual function) for each Virtual Machine vNIC, each with its own connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).
 
  
Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments.

Future versions of the plugin will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.
 
 
 
This plugin is implemented according to the Plugin-Agent pattern.
 
 
 
 
 
             +-----------------+                      +--------------+
             | Controller node |                      | Compute node |
   +-----------------------------------+    +-----------------------------------+
   |  +-----------+      +----------+  |    |  +----------+       +----------+  |
   |  |           |      |          |  |    |  |          |  zmq  |          |  |
   |  | OpenStack | v2.0 | Mellanox |  |RPC |  | Mellanox |REQ/REP| Mellanox |  |
   |  | Neutron   +------+ Neutron  +-----------+ Neutron  +-------+ Embedded |  |
   |  |           |      | Plugin   |  |    |  | Agent    |       | Switch   |  |
   |  |           |      |          |  |    |  |          |       | (NIC)    |  |
   |  +-----------+      +----------+  |    |  +----------+       +----------+  |
   +-----------------------------------+    +-----------------------------------+
 
 
* The OpenStack Mellanox Neutron plugin implements the Neutron v2.0 API.

* The Mellanox Neutron plugin processes the Neutron API calls and manages network segmentation ID allocation.

* The plugin uses a database to store configuration and allocation mappings.

* The plugin maintains compatibility with the Linux Bridge plugin and supports the DHCP and L3 agents by running the L2 Linux bridge agent on the network node.

* The Mellanox OpenStack Neutron agent (L2 agent) runs on each compute node.

* The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
 
 
 
==  Mellanox Nova VIF Driver ==
 
The Mellanox Nova VIF driver should be used when running the Mellanox Neutron plugin. This driver supports the VIF plugin by binding the vNIC (para-virtualized or SR-IOV, with optional RDMA guest access) to the embedded switch port.

The VIF driver for para-virtualized mode is included in Nova. For SR-IOV pass-through, use the VIF driver from the Mellanox git repository or RPM.
 
 
 
== Prerequisites ==
 
 
 
Make sure you have the RHOS or RDO repositories installed on all nodes in your cluster.
 
 
 
For RHOS installation, refer to the [http://www.redhat.com/resourcelibrary/reference-architectures/deploying-and-using-red-hat-openstack-rhos-2-dot-1 RHOS reference document].

For RDO installation, refer to the [http://openstack.redhat.com/Quickstart RDO QuickStart].
 
 
 
'''Neutron server'''
 
 
 
No special prerequisites are needed.
 
 
 
'''Compute Node'''
 
 
 
1. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed; it is also available from the EPEL repository.
 
 
 
2. python-setuptools (use "yum install python-setuptools")
 
 
 
3. python-pip (use "yum install python-pip")
 
 
 
4. Compute nodes should be equipped with Mellanox ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])
 
 
 
5. Mellanox OFED 2.0.3 must be installed. Refer to the Mellanox website for the latest OFED ([http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers]).
 
 
 
6. Enable SR-IOV on the ConnectX-3 card; a minimal driver-side sketch follows this list. Refer to [http://community.mellanox.com/docs/DOC-1317 Mellanox Community] for the full procedure.
 
 
 
7. The software package iproute2 ([https://www.kernel.org/pub/linux/utils/net/iproute2/ Code], [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed. Required only on compute nodes.
 
 
 
8. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed (version 3.8 or higher). Required only on compute nodes.
 
 
 
9. oslo.config (use "pip-python install oslo.config"). Required only on compute nodes.
 
 
 
10. The python-ethtool and python-zmq packages must be installed.
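
A minimal sketch of the driver-side part of step 6, assuming the mlx4_core driver is used; num_vfs=16 is only an example, and port_type_array=2,2 sets both ports to Ethernet (use 1,1 for InfiniBand). The firmware-side steps are covered in the Mellanox Community document linked above.

  #echo "options mlx4_core num_vfs=16 probe_vf=0 port_type_array=2,2" > /etc/modprobe.d/mlx4_core.conf
  #/etc/init.d/openibd restart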
 
 
 
 
 
'''Network Node'''
 
 
 
1. The network node should be equipped with a Mellanox ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])
 
 
 
2. Mellanox OFED 2.0.3 must be installed. Refer to the Mellanox website for the latest OFED ([http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers]).
 
 
 
 
 
'''Installation with Red Hat Enterprise Linux OpenStack Platform'''
 
 
 
For each section, refer to 'Installation via RPMs'. There are no additional prerequisites except those required by Red Hat Enterprise Linux OpenStack Platform. Please refer to https://access.redhat.com/products/Cloud/OpenStack/ for additional information regarding this product.
 
 
 
= Controller Node =
 
In /etc/nova/nova.conf, ensure you have the following line:
 
  security_group_api=nova
 
 
 
If you changed it, restart the Nova services, e.g.:

  #cd /etc/init.d && for i in $( ls nova-* ); do service $i restart; done
 
 
 
= Neutron Server Node =
 
 
 
== Installation  ==
 
 
 
Note: if you are using Red Hat Enterprise Linux OpenStack Platform (RHOS), RHEL 6.4 is the minimum requirement.
 
 
 
1. The Neutron server uses a MySQL database. Make sure you have a running MySQL database for Neutron.
 
 
 
If the Neutron server is already running, stop it:
 
  #/etc/init.d/neutron-server stop
 
 
 
If you want to reuse a previously created database, you have to drop it and create it again. For example:
 
  mysql -u root -ppassword <<EOF
 
  drop database if exists neutron;
 
  create database neutron;
 
  EOF
 
 
 
If you want to create a new one (including user privileges):
 
 
 
  mysql -u root -ppassword <<EOF
 
  drop database if exists neutron;
 
  create database neutron;
 
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';
 
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron' IDENTIFIED BY 'password';
 
  FLUSH PRIVILEGES;
 
  EOF
 
 
 
 
 
2. Install the required RPM for the Mellanox Neutron plugin:
 
  #yum install openstack-neutron-mellanox
 
 
 
3. Modify the /etc/neutron/neutron.conf file.
 
  core_plugin = neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
 
 
 
4. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment.  Click [[Mellanox-Neutron #Mellanox_Neutron_Plugin_Configuration | here]] for configuration options.
 
 
 
5. Change the soft link to the plugin configuration (plugin.ini)
 
 
 
  #unlink /etc/neutron/plugin.ini
 
  #ln -s /etc/neutron/plugins/mlnx/mlnx_conf.ini /etc/neutron/plugin.ini
 
 
 
== Configuration ==
 
 
 
1. Make the Mellanox plugin the current Neutron plugin by editing neutron.conf and changing core_plugin.
 
  core_plugin = neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
 
 
 
2. Plugin configuration:
 
Edit the configuration file: /etc/neutron/plugins/mlnx/mlnx_conf.ini
 
 
 
  [DATABASE]
  '''sql_connection''' - Set this to the connection string of the Neutron database used by the plugin.
                    It must match the MySQL configuration. For example:
                    "mysql://neutron:password@127.0.0.1:3306/neutron"
  '''reconnect_interval''' = 2 (default)

  [MLNX]
  '''tenant_network_type''' - must be set to one of the supported tenant network types.
                        Possible values: 'vlan' for Ethernet or 'ib' for InfiniBand. 'vlan' is the default.
  '''network_vlan_ranges''' - must be configured with the names of the physical networks
                        managed by the Mellanox plugin, along with the ranges of VLAN IDs
                        available on each physical network for allocation to virtual networks.
                        The possible VLAN range is 1-4093.
                        The default is "default:1:100".
                        Format: <fabric name>:<vlan range start>:<vlan range end>
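
Putting it together, a minimal server-side mlnx_conf.ini might look like this (the credentials and VLAN range are the example values above; adjust them to your environment):

  [DATABASE]
  sql_connection = mysql://neutron:password@127.0.0.1:3306/neutron
  reconnect_interval = 2
  [MLNX]
  tenant_network_type = vlan
  network_vlan_ranges = default:1:100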
 
== Start Services ==
 
Start (or restart) the Neutron server
 
  #/etc/init.d/neutron-server start
 
 
 
=  Compute Nodes =
 
 
 
== Installation ==
 
1. If you did not download the Mellanox OpenStack repo file yet, download it:

  #wget http://www.mellanox.com/downloads/solutions/openstack/havana/repo/mlnx-havana/mlnx-havana.repo -O /etc/yum.repos.d/mlnx-havana.repo
 
 
 
2. Install the eswitchd RPM:

  #yum install eswitchd
 
 
 
3. If you intend to use Ethernet in para-virtualized mode only, the VIF driver is already included in the Nova package. Otherwise, install the Mellanox VIF driver (make sure Nova is installed on your server):

  #yum install mlnxvif
 
 
 
4. Install the required RPM for the Neutron agent:

  #yum install openstack-neutron-mellanox
 
 
 
== Configuration ==
 
 
 
1. Configure /etc/eswitchd/eswitchd.conf if needed.

Please refer to the Mellanox Community for the eswitchd installation notes (click [http://community.mellanox.com/docs/DOC-1126 here]).
 
 
 
2. Modify /etc/nova/nova.conf
 
 
 
    compute_driver=nova.virt.libvirt.driver.LibvirtDriver
 
    libvirt_vif_driver=mlnxvif.vif.MlxEthVIFDriver
 
    security_group_api=nova
 
    connection_type=libvirt
 
 
 
If you did not install the Mellanox VIF driver and plan to use Ethernet in para-virtualized mode only, change the following:
 
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
 
 
 
3. Modify the /etc/neutron/plugins/mlnx/mlnx_conf.ini file to reflect your environment.
 
 
 
  [AGENT]
  '''polling_interval''' - Polling interval (in seconds) for existing vNICs. The default is 2 seconds.
  '''rpc''' - must be set to 'True'.

  [ESWITCH]
  '''physical_interface_mappings''' - maps each physical network name to the physical interface (on top of the Mellanox adapter) connecting the node to that physical network. The format of this parameter is <fabric name>:<PF name> (only relevant on compute nodes). The PF name can either be the PF (Physical Function) name, 'autoeth' for automatic Ethernet configuration, or 'autoib' for automatic InfiniBand configuration. The default is "default:autoeth".
  '''vnic_type''' - type of VM network interface: 'direct' or 'hostdev' according to libvirt terminology.
        direct: provides a macvtap device on top of the PCI device (default).
        hostdev: the traditional method of assigning any generic PCI device to a guest (SR-IOV).
        bridge: used when running the Linux Bridge plugin on top of an eIPoIB device.
  '''daemon_endpoint''' - eswitch daemon endpoint connection (URL) (default='tcp://127.0.0.1:5001').
  '''request_timeout''' - the number of milliseconds the agent will wait for a response to a request to the daemon (default=3000 msec).
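
Putting it together, a minimal agent-side mlnx_conf.ini using the defaults described above might look like this (a sketch; adjust the fabric and interface mapping to your environment):

  [AGENT]
  polling_interval = 2
  rpc = True
  [ESWITCH]
  physical_interface_mappings = default:autoeth
  vnic_type = direct
  daemon_endpoint = tcp://127.0.0.1:5001
  request_timeout = 3000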
 
 
 
For a plugin configuration file example (Havana), please refer to the [https://github.com/openstack/neutron/blob/stable/havana/etc/neutron/plugins/mlnx/mlnx_conf.ini example mlnx_conf.ini file].
 
 
 
4. Enable the DHCP server to allow VMs to acquire IPs:
 
  neutron_use_dhcp=true
 
 
 
== Start Services ==
 
 
 
1. Restart Nova.
 
  #/etc/init.d/openstack-nova-compute restart
 
 
 
2. Start the eswitchd daemon:

  #/etc/init.d/eswitchd start
 
 
 
3. Start the Neutron agent
 
  #/etc/init.d/neutron-mlnx-agent start
 
 
 
Note: the eswitchd daemon must be running before the Neutron agent is started.
 
 
 
= Network Node =
 
A network node equipped with a Mellanox ConnectX-3 adapter card should be configured as follows:
 
 
 
1. Install the Neutron Linux bridge plugin and the Neutron DHCP and L3 agents:
 
 
 
  #yum install openstack-neutron-linuxbridge
 
 
 
2. Change the following configuration in the ini file (/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini):

  physical_interface_mappings = default:eth2 (where eth2 is typically the Ethernet interface of the Mellanox adapter)
 
 
 
3. Configure the DHCP server according to the following guidelines:
 
 
 
http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html
 
 
 
4. Start the DHCP and Linux bridge agents:
 
  #/etc/init.d/neutron-dhcp-agent start
 
  #/etc/init.d/neutron-linuxbridge-agent start
 
 
 
= Mellanox Neutron Plugin Configuration =
 
== Neutron Configuration ==
 
1. Make the Mellanox plugin the current Neutron plugin by editing neutron.conf and changing core_plugin.
 
  core_plugin = neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
 
 
 
2. Database configuration: install MySQL on the central server and create a database named "neutron" (see the example in the Neutron Server Node section above).
 
 
 
3. Plugin configuration:
 
Edit the configuration file: /etc/neutron/plugins/mlnx/mlnx_conf.ini
 
 
 
On the central server node:
 
 
 
  [DATABASE]
  '''sql_connection''' - Set this to the connection string of the Neutron database used by the plugin.
                    It must match the MySQL configuration. For example:
                    "mysql://neutron:password@127.0.0.1:3306/neutron"
  '''reconnect_interval''' = 2 (default)

  [MLNX]
  '''tenant_network_type''' - must be set to one of the supported tenant network types.
                        Possible values: 'vlan' for Ethernet or 'ib' for InfiniBand. 'vlan' is the default.
  '''network_vlan_ranges''' - must be configured with the names of the physical networks
                        managed by the Mellanox plugin, along with the ranges of VLAN IDs
                        available on each physical network for allocation to virtual networks.
                        The possible VLAN range is 1-4093.
                        The default is "default:1:100".
                        Format: <fabric name>:<vlan range start>:<vlan range end>
 
 
On the compute node(s):
 
 
 
  [AGENT]
  '''polling_interval''' - Polling interval (in seconds) for existing vNICs. The default is 2 seconds.
  '''rpc''' - must be set to 'True'.

  [ESWITCH]
  '''physical_interface_mappings''' - maps each physical network name to the physical interface (on top of the Mellanox adapter) connecting the node to that physical network. The format of this parameter is <fabric name>:<PF name> (only relevant on compute nodes). The PF name can either be the PF (Physical Function) name, 'autoeth' for automatic Ethernet configuration, or 'autoib' for automatic InfiniBand configuration. The default is "default:autoeth".
  '''vnic_type''' - type of VM network interface: 'direct' or 'hostdev' according to libvirt terminology.
        direct: provides a macvtap device on top of the PCI device (default).
        hostdev: the traditional method of assigning any generic PCI device to a guest (SR-IOV).
        bridge: used when running the Linux Bridge plugin on top of an eIPoIB device.
  '''daemon_endpoint''' - eswitch daemon endpoint connection (URL) (default='tcp://127.0.0.1:5001').
  '''request_timeout''' - the number of milliseconds the agent will wait for a response to a request to the daemon (default=3000 msec).
 
 
 
For a plugin configuration file example (Havana), please refer to the [https://github.com/openstack/neutron/blob/stable/havana/etc/neutron/plugins/mlnx/mlnx_conf.ini example mlnx_conf.ini file].
 
 
 
==  Nova Configuration (Compute Node(s))  ==
 
 
Edit the nova.conf file.
 
1. Configure the VIF driver and the libvirt VIF type:
 
  compute_driver=nova.virt.libvirt.driver.LibvirtDriver
 
  connection_type=libvirt
 
  libvirt_vif_driver=mlnxvif.vif.MlxEthVIFDriver
 
2. Enable the DHCP server to allow VMs to acquire IPs:
 
  neutron_use_dhcp=true
 
 
 
= InfiniBand support =
 
The Mellanox Neutron plugin uses InfiniBand partitions (PKeys) to separate networks.
 
 
 
== Configuring Partitions ==
 
There are two options to configure InfiniBand partitions:
 
 
 
1. Manual configuration: all the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf).
 
 
 
2. Automatic configuration (future).
 
 
 
Add/change the following in the partitions.conf file:
 
  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
 
 
 
For every network you want to configure in Neutron, you have to configure the PKey associated with the VLAN of this network (as defined in Neutron):
 
  net1=0x1, ipoib, sl=0, defmember=full : SELF;
 
 
 
For example, if we have this configuration in /etc/neutron/plugins/mlnx/mlnx_conf.ini:
 
 
 
  [MLNX]
 
  network_vlan_ranges = default:1:10
 
 
 
we'll have the following partitions.conf configuration:
 
 
  management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
 
  net1=0x1, ipoib, sl=0, defmember=full : ALL;
 
  net2=0x2, ipoib, sl=0, defmember=full : ALL;
 
  net3=0x3, ipoib, sl=0, defmember=full : ALL;
 
  net4=0x4, ipoib, sl=0, defmember=full : ALL;
 
  net5=0x5, ipoib, sl=0, defmember=full : ALL;
 
  net6=0x6, ipoib, sl=0, defmember=full : ALL;
 
  net7=0x7, ipoib, sl=0, defmember=full : ALL;
 
  net8=0x8, ipoib, sl=0, defmember=full : ALL;
 
  net9=0x9, ipoib, sl=0, defmember=full : ALL;
 
  net10=0xa, ipoib, sl=0, defmember=full : ALL;
 
 
 
Change the following in /etc/opensm/opensm.conf:
 
  allow_both_pkeys TRUE
 
 
 
 
 
Restart OpenSM:
 
  #/etc/init.d/opensmd restart
 
 
 
== Configuring Neutron Server and Compute Nodes==
 
=== SR-IOV (passthrough mode) ===
 
 
 
On the controller, change the following in  the file /etc/neutron/plugins/mlnx/mlnx_conf.ini
 
  tenant_network_type = ib
 
  vnic_type = hostdev
 
  network_vlan_ranges = default:1:10
 
  (sql_connection and reconnect_interval  can be configured as described above)
 
 
 
The mapping between VLAN and PKey is as follows: VLAN X = PKey 0x8000 + X. For example, VLAN 2 maps to PKey 0x8002.
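
Since the mapping is plain arithmetic, the PKey for a given VLAN can be computed in the shell as a quick sanity check:

  #printf '0x%x\n' $((0x8000 + 2))
  0x8002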
 
 
 
On the compute node, change the following in the file /etc/neutron/plugins/mlnx/mlnx_conf.ini:

  physical_interface_mappings = default:autoib
 
 
 
The tenant_network_type, vnic_type and network_vlan_ranges parameters should be configured the same as on the controller.
 
 
 
'autoib' can be replaced by the name of the PF.
 
 
 
 
 
Change the file /etc/eswitchd/eswitchd.conf
 
  fabrics  = default:autoib (or default:ib0)
 
 
 
Start eswitchd and then the Neutron agent:
 
  /etc/init.d/eswitchd restart
 
  /etc/init.d/neutron-mlnx-agent restart
 
 
 
Verify that the Mellanox VIF driver is configured in /etc/nova/nova.conf:
 
  libvirt_vif_driver=mlnxvif.vif.MlxEthVIFDriver
 
 
 
Restart nova
 
  #/etc/init.d/openstack-nova-compute restart
 
 
 
=== Para-Virtualized ===
 
Use the Mellanox Neutron plugin on the controller and the Linux bridge plugin on the compute node.
 
 
 
On the controller, configure the file /etc/neutron/plugins/mlnx/mlnx_conf.ini
 
  tenant_network_type = ib
 
  vnic_type = bridge
 
 
 
On the compute node, the Linux bridge plugin is used.
 
 
 
The eIPoIB module should be up and configured. In /etc/infiniband/openib.conf:

  E_IPOIB_LOAD=yes
 
 
 
Then restart openibd:

  #/etc/init.d/openibd restart
 
 
 
Please refer to the eIPoIB configuration in the Mellanox OFED User Manual.
 
 
 
Once we have the eIPoIB interface, we use it in the Linux bridge agent configuration.
 
 
 
For example, assuming eth1 is the eIPoIB interface:
 
 
 
To check that the interface type is eIPoIB, run the following command and verify that the driver is "eth_ipoib":
 
  #ethtool -i <interface>
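
On an eIPoIB interface, the first line of the output reports the driver; the remaining fields vary by OFED version. For example, assuming eth1 is the eIPoIB interface:

  #ethtool -i eth1
  driver: eth_ipoib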
 
 
 
Change the file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
 
 
 
  [linux_bridge]  
 
  physical_interface_mappings =  default:eth1 
 
 
 
Configure the Linux bridge VIF driver in /etc/nova/nova.conf:

  libvirt_vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
 
 
 
Restart nova
 
  #/etc/init.d/openstack-nova-compute restart
 
 
 
After configuration, the Linux bridge agent should be restarted:
 
  #/etc/init.d/neutron-linuxbridge-agent restart
 
 
 
== Configuring Network Node ==
 
 
 
Configure the Linux bridge plugin as described in the InfiniBand para-virtualized section above.

To make use of DHCP in para-virtualized mode (with the Linux bridge), make sure the 'bootp-broadcast-always' option is configured in the instance's dhclient.conf file; a sketch follows.
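
A minimal sketch of the guest-side setting, assuming the image uses ISC dhclient with its configuration at /etc/dhcp/dhclient.conf (the path varies by distribution), is to add this single line:

  bootp-broadcast-always;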
 
 
 
=== DHCP Server ===
 
 
 
For DHCP support, the network node should use the Mellanox dnsmasq driver as the DHCP driver:
 
    # yum install mlnx-dnsmasq
 
 
 
Change the following in /etc/neutron/dhcp_agent.ini
 
  dhcp_driver = mlnx_dhcp.MlnxDnsmasq
 
 
 
Download ipoibd and override /sbin/ipoibd:
 
  #wget http://www.mellanox.com/downloads/solutions/openstack/havana/ipoibd
 
  #cp ipoibd /sbin/ipoibd
 
 
 
Restart openibd
 
  #/etc/init.d/openibd restart
 
 
 
Start the Linux bridge agent:
 
  /etc/init.d/neutron-linuxbridge-agent restart
 
 
 
Start the DHCP server:
 
  /etc/init.d/neutron-dhcp-agent restart
 
 
 
= Usage Examples =
 
* To create an SR-IOV interface, refer to the [http://www.mellanox.com/sdn/stage/pdf/Mellanox-OpenStack-OpenFlow-Solution.pdf Mellanox OpenStack solution document], "Creating an SR-IOV Instance" chapter.

* To create a para-virtualized interface, refer to the [http://www.mellanox.com/sdn/stage/pdf/Mellanox-OpenStack-OpenFlow-Solution.pdf Mellanox OpenStack solution document], "Creating a Para-Virtualized vNIC Instance" chapter.
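
Whichever vNIC type you use, the generic CLI flow of attaching a VM to a Mellanox network looks like the sketch below; net1, subnet1, vm1 and the flavor/image names are placeholders, and the vNIC-type specifics are covered in the document referenced above.

  #neutron net-create net1
  #neutron subnet-create net1 10.0.0.0/24 --name subnet1
  #neutron port-create net1
  #nova boot vm1 --flavor m1.small --image <image-id> --nic port-id=<port-id>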
 
 
 
= Known issues and Troubleshooting =
 
 
 
For known issues and troubleshooting options refer to  [http://community.mellanox.com/docs/DOC-1127 Mellanox OpenStack Troubleshooting].
 
 
 
= References =
 
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]
 
 
 
2. [https://github.com/mellanox-openstack Source repository]
 
 
 
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]
 
 
 
4. [http://www.mellanox.com/sdn/stage/pdf/Mellanox-OpenStack-OpenFlow-Solution.pdf Mellanox OpenStack Solution Reference Architecture]
 
 
 
5. [http://community.mellanox.com/docs/DOC-1127 Mellanox OpenStack Troubleshooting]
 
 
 
For more details, please send your questions to [mailto:openstack@mellanox.com openstack@mellanox.com].
 
 
 
Return to [https://wiki.openstack.org/wiki/Mellanox-OpenStack  Mellanox-OpenStack] wiki page.
 
