= Mellanox Neutron Plugin =
= Overview =

Mellanox supports the OpenStack Neutron releases with open-source networking components. It delivers higher compute and storage performance and additional functionality, such as NIC-based switching, to provide better security and isolation for virtual cloud environments.
== Mellanox Neutron Plugin ==
 
The OpenStack Mellanox Neutron plugin supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA.
 
The Mellanox Neutron Plugin provides a hardware vNIC (based on an SR-IOV virtual function) for each Virtual Machine vNIC, each with its own connectivity, security, and QoS attributes. Hardware vNICs can be mapped to guest VMs through para-virtualization (using a Tap device), or directly as a virtual PCI device, allowing higher performance and advanced features such as RDMA (Remote Direct Memory Access).
 
  
Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments.

Future versions of the plug-in will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.

Release-specific pages:

* [[Mellanox-Neutron-ML2-Train|Mellanox-Train]]
* [[Mellanox-Neutron-ML2-Rocky|Mellanox-Rocky]]
* [[Mellanox-Neutron-ML2-Queens|Mellanox-Queens]]
* [[Mellanox-Neutron-ML2-Pike|Mellanox-Pike]]
* [[Mellanox-Neutron-ML2-Ocata|Mellanox-Ocata]]
* [[Mellanox-Neutron-ML2-Newton|Mellanox-Newton]]
* [[Mellanox-Neutron-ML2-Mitaka|Mellanox-Mitaka]]
* [[Mellanox-Neutron-ML2-Liberty|Mellanox-Liberty]]
* [[Mellanox-Neutron-ML2-Kilo|Mellanox-Kilo]]
* [[Mellanox-Neutron-ML2-Juno|Mellanox-Juno]]
* [[Mellanox-Neutron-ML2-Icehouse|Mellanox-Icehouse]]
* [[Mellanox-Neutron-Havana-Ubuntu|Mellanox-Havana-Ubuntu]] / [[Mellanox-Neutron-Havana-Redhat|Mellanox-Havana-RedHat]]
* [[Mellanox-Neutron-Grizzly|Mellanox-Grizzly]]

* [[Mellanox-packages-versioning|Mellanox packages versions]]
  
The plugin is implemented according to the Plugin-Agent pattern:
 
 
              +-----------------+                       +--------------+
              | Controller node |                       | Compute node |
     +-----------------------------------+     +-----------------------------------+
     |  +-----------+      +----------+  |     |  +----------+       +----------+  |
     |  |           |      |          |  |     |  |          |  zmq  |          |  |
     |  | OpenStack | v2.0 | Mellanox |  | RPC |  | Mellanox |REQ/REP| Mellanox |  |
     |  | Neutron   +------+ Neutron  +-----------+ Neutron  +-------+ Embedded |  |
     |  |           |      | Plugin   |  |     |  | Agent    |       | Switch   |  |
     |  |           |      |          |  |     |  |          |       | (NIC)    |  |
     |  +-----------+      +----------+  |     |  +----------+       +----------+  |
     +-----------------------------------+     +-----------------------------------+
 
 
* The OpenStack Mellanox Neutron Plugin implements the Neutron v2.0 API.
* The Mellanox Neutron Plugin processes the Neutron API calls and manages network segmentation ID allocation.
* The plugin uses a database to store configuration and allocation mappings.
* The plugin maintains compatibility with the Linux Bridge Plugin and supports the DHCP and L3 agents by running the L2 Linux Bridge Agent on the network node.
* The Mellanox OpenStack Neutron Agent (L2 Agent) runs on each compute node.
* The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.
 
 
 
==  Mellanox Nova VIF Driver ==
 
The Mellanox Nova VIF driver should be used when running the Mellanox Neutron Plugin. It supports the VIF plugin by binding each vNIC (para-virtualized or SR-IOV, with optional RDMA guest access) to the embedded switch port.
 
 
 
== Prerequisites ==
 
'''The following are the Mellanox Neutron Plugin prerequisites:'''
 
 
 
1. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed. EPEL repository can be used as well.
 
 
 
2. python-setuptools (use "yum install python-setuptools")
 
 
 
3. python-pip (use "yum install python-pip")
 
 
 
'''Prerequisites for compute servers only:'''
 
 
 
1. RHEL 6.3 or above.
 
 
 
2. Compute nodes must be equipped with a Mellanox ConnectX®-3 network adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])
 
 
 
3. Mellanox OFED 2.0.3 must be installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0  openstack@mellanox.com] to retrieve this version.
 
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])
 
 
 
4. Enable SR-IOV on ConnectX-3 card. Refer to [http://community.mellanox.com/docs/DOC-1317 Mellanox Community]
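

As a quick sanity check (a sketch; the exact device names vary by system), you can verify that the virtual functions are exposed after enabling SR-IOV:

  #lspci | grep -i mellanox

The output should list the physical function plus one "Virtual Function" entry per configured VF.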
 
 
 
5. The software package iproute2 ([https://www.kernel.org/pub/linux/utils/net/iproute2/ Code], [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed. Required only on compute nodes.
 
 
 
6. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed (version 3.8 or higher). Required only on compute nodes.
 
 
 
7. oslo.config (use "pip-python install oslo.config"). Required only on compute nodes.
 
 
 
8. The python-ethtool and python-zmq packages (see the install sketch below).
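

On RHEL-based systems these can typically be installed with yum (from the EPEL repository if needed); a sketch:

  #yum install python-ethtool python-zmq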
 
 
 
'''Installation with Red Hat Enterprise Linux OpenStack Platform'''
 
 
 
For each section, refer to 'Installation via RPMs'. There are no additional prerequisites beyond those required by Red Hat Enterprise Linux OpenStack Platform. Refer to https://access.redhat.com/products/Cloud/OpenStack/ for additional information about this product.
 
 
 
== Code Structure ==
 
 
 
Mellanox Neutron Plugin and the Nova VIF driver are located at [http://community.mellanox.com/docs/DOC-1187 Mellanox OpenStack RPMs]
 
 
 
1. Neutron Plugin package structure:
 
  quantum/etc/quantum/plugins/mlnx - plugin configuration
  mlnx_conf.ini - sample plugin configuration

  quantum/quantum/plugins/mlnx - plugin code
  /agent - agent code
  /common - common code
  /db - plugin persistency model and wrapping methods
  mlnx_plugin.py - Mellanox OpenStack Plugin
  rpc_callbacks.py - RPC handler for received messages
  agent_notify_api.py - agent RPC notify methods
 
 
 
Mellanox Neutron Plugin is located under quantum/quantum/plugins/.
 
 
 
2. Nova VIF driver package structure is:
 
  nova/nova/virt/libvirt/mlnx - Nova VIF driver code
 
 
 
The Mellanox Nova VIF driver is installed under nova/virt/libvirt/mlnx.
 
 
 
= Mellanox Neutron Plugin Installation (for Grizzly) =
 
== On the Controller Node ==
 
In /etc/nova/nova.conf ensure you have the following line:
 
  security_group_api=nova
 
 
 
If you changed it, restart the Nova services. For example:

  #cd /etc/init.d && for i in $( ls nova-* ); do service $i restart; done
 
 
 
== On the Neutron Server Node ==
 
 
 
=== Creating MySQL database for Neutron Server ===
 
The Neutron server uses a MySQL database. Make sure you have a running MySQL database for Quantum.
 
 
 
If the Neutron server is already running, stop it:
 
  #/etc/init.d/quantum-server stop
 
 
 
If you want to reuse a database that was previously created, drop it and create it again. For example:
 
  mysql -u root -ppassword <<EOF
 
  drop database if exists quantum;
 
  create database quantum;
 
  EOF
 
 
 
If you want to create a new database and grant privileges:
 
 
 
  mysql -u root -ppassword <<EOF
 
  drop database if exists quantum;
 
  create database quantum;
 
  GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'password';
 
  GRANT ALL PRIVILEGES ON quantum.* TO 'quantum' IDENTIFIED BY 'password';
 
  FLUSH PRIVILEGES;
 
  EOF
 
 
 
=== Installation via RPMs - for RH6.3 and above ===
 
 
 
Note: if you are using Red Hat Enterprise Linux OpenStack Platform, RHEL 6.4 is the minimum requirement.
 
 
 
1. Download the Mellanox OpenStack repo file
 
  #wget http://www.mellanox.com/downloads/solutions/openstack/grizzly/repo/mlnx-grizzly/mlnx-grizzly.repo -O /etc/yum.repos.d/mlnx-grizzly.repo
 
 
 
2. Install the required RPM for the Neutron server:
 
  #yum install openstack-quantum-mlnx
 
 
 
3. Modify the /etc/quantum/quantum.conf file.
 
  core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
 
 
 
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.  Click [[Mellanox-Neutron #Mellanox_Neutron_Plugin_Configuration | here]] for configuration options.
 
 
 
5. Change the soft link to the plugin configuration (plugin.ini)
 
 
 
  #unlink /etc/quantum/plugin.ini
 
  #ln -s /etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugin.ini
 
 
 
6. Start the Neutron server
 
  #/etc/init.d/quantum-server start
 
 
 
=== Manual installation - for other operating systems ===
 
 
 
1. Make sure the Neutron server is installed and stopped:
 
  #/etc/init.d/quantum-server stop
 
  
2. Copy the Mellanox OpenStack plugin to the installed Neutron plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins):
  #git clone https://github.com/mellanox-openstack/mellanox-quantum-plugin
 
  #cd mellanox-quantum-plugin
 
  #git checkout v0.4
 
  #cp -a quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins
 
 
 
3. Modify the /etc/quantum/quantum.conf file.
 
  core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
 
 
 
4. Copy the Mellanox plugin configuration.
 
  #mkdir -p /etc/quantum/plugins/mlnx
 
  #cp quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx
 
 
 
5. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment. Click [[Mellanox-Neutron#Mellanox_Neutron_Plugin_Configuration | here]] for configuration options.
 
 
 
6. If you run the Neutron server using an init script, change the Neutron server configuration to point to the Mellanox Neutron Plugin.

For example, on Ubuntu, change /etc/default/quantum-server:

  QUANTUM_PLUGIN_CONFIG="/etc/quantum/plugins/mlnx/mlnx_conf.ini"
 
 
 
7. Run the server:
 
  #quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini
 
or
 
  #/etc/init.d/quantum-server start
 
 
 
==  On Compute Nodes ==
 
 
 
=== The eswitchd Daemon ===
 
1. If you have not already downloaded the Mellanox OpenStack repo file, download it:

  #wget http://www.mellanox.com/downloads/solutions/openstack/grizzly/repo/mlnx-grizzly/mlnx-grizzly.repo -O /etc/yum.repos.d/mlnx-grizzly.repo
 
 
 
2. Install the eswitchd RPM:

  #yum install eswitchd
 
 
 
3. Configure /etc/eswitchd/eswitchd.conf if needed.



Refer to the Mellanox Community for the eswitchd installation notes ([http://community.mellanox.com/docs/DOC-1126 link]).
 
 
 
=== Nova-compute ===
 
 
 
1. Make sure nova is installed on your server.
 
   
 
2. Download and copy the Nova Mellanox VIF driver.
 
    #git clone https://github.com/mellanox-openstack/mellanox-quantum-plugin
 
    #cd mellanox-quantum-plugin
 
    #git checkout v0.4
 
    #cp -a nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt
 
 
 
or
 
 
 
  #wget https://github.com/mellanox-openstack/mellanox-quantum-plugin/archive/v0.4.tar.gz

  #tar zxvf v0.4.tar.gz

  #cp -a mellanox-quantum-plugin-0.4/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt
 
 
 
3. Modify /etc/nova/nova.conf
 
    compute_driver=nova.virt.libvirt.driver.LibvirtDriver
 
    libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver
 
    fabric=default - specifies the physical network for vNICs (currently one fabric per node is supported)
 
    security_group_api=nova
 
 
 
4. Restart Nova.
 
  #/etc/init.d/openstack-nova-compute restart
 
 
 
=== Neutron Agent ===
 
 
 
Installation via RPMs - for RH6.3 and above
 
 
 
1. Install the required RPM for the Quantum agent:
 
  #yum install openstack-quantum-mlnx
 
 
 
2. Copy the /etc/quantum/quantum.conf file from the Neutron server and adjust it if needed.
 
 
 
3. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment. Click [[Mellanox-Neutron#Mellanox_Neutron_Plugin_Configuration | here]] for configuration options.
 
 
 
4. Start the eswitchd daemon:
 
    #/etc/init.d/eswitchd start
 
 
 
5. Start the Quantum agent:
 
  #/etc/init.d/quantum-mlnx-agent start
 
 
 
Note: the eswitchd daemon must be running before the Neutron agent is started.
 
 
 
= Mellanox Neutron Plugin Configuration =
 
== Neutron Configuration ==
 
1. Make the Mellanox plugin the current Neutron plugin by editing quantum.conf and changing core_plugin.
 
  core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
 
 
 
2. Database configuration: Install MySQL on the central server. Create a database named "quantum".
 
 
 
3. Plugin configuration:
 
Edit the configuration file: /etc/quantum/plugins/mlnx/mlnx_conf.ini
 
 
 
On the central server node:
 
 
 
  [DATABASE]
 
  '''sql_connection''' - should be set to the connection string of the Neutron database used by the plugin.
                    It must match the MySQL configuration. For example:
                    "mysql://quantum:password@127.0.0.1:3306/quantum"
  '''reconnect_interval''' = 2 (default)
 
 
 
 
 
  [MLNX]
 
  '''tenant_network_type''' - must be set to one of the supported tenant network types.
                        Possible values: 'vlan' for Ethernet or 'ib' for InfiniBand; 'vlan' is the default.
  '''network_vlan_ranges''' - must be configured to specify the names of the physical networks
                        managed by the Mellanox plugin, along with the ranges of VLAN IDs
                        available on each physical network for allocation to virtual networks.
                        The possible VLAN range is 1-4093. The format is
                        <fabric name>:<vlan range start>:<vlan range end>;
                        the default is "default:1:100".
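
For illustration, a minimal server-side configuration might look as follows (the values are examples and must match your environment):

  [DATABASE]
  sql_connection = mysql://quantum:password@127.0.0.1:3306/quantum
  reconnect_interval = 2

  [MLNX]
  tenant_network_type = vlan
  network_vlan_ranges = default:1:100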
 
 
On the compute node(s):
 
 
 
  [AGENT]
 
  '''polling_interval''' - Polling interval (in seconds) for existing vNICs. The default is 2 seconds.
 
  '''rpc'''  - must be set to 'True'
 
 
 
  [ESWITCH]
 
  '''physical_interface_mapping''' - maps each physical network name to the physical interface
        (on top of the Mellanox adapter) connecting the node to that physical network.
        The format of this parameter is <fabric name>:<PF name> (only relevant on compute nodes).
        The PF name can be the PF (Physical Function) name, 'autoeth' for automatic Ethernet
        configuration, or 'autoib' for automatic InfiniBand configuration. The default is "default:autoeth".
  '''vnic_type''' - type of VM network interface: 'direct' or 'hostdev', according to libvirt terminology.
        hostdev: the traditional method of assigning any generic PCI device to a guest (SR-IOV).
        direct: provides a macvtap device on top of the PCI device (default).
        bridge: used when running the Linux Bridge Plugin on top of an eIPoIB device.
  '''daemon_endpoint''' - eswitch daemon endpoint connection (URL). The default is 'tcp://127.0.0.1:5001'.
  '''request_timeout''' - the number of milliseconds the agent waits for a response to a request
        to the daemon (default: 3000 ms).
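
Similarly, a minimal compute-node configuration might look as follows (a sketch built from the defaults described above):

  [AGENT]
  polling_interval = 2
  rpc = True

  [ESWITCH]
  physical_interface_mapping = default:autoeth
  vnic_type = direct
  daemon_endpoint = tcp://127.0.0.1:5001
  request_timeout = 3000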
 
 
 
For a plugin configuration file example (Grizzly), please refer to the sample [https://github.com/mellanox-openstack/mellanox-quantum-plugin/blob/stable/grizzly/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini mlnx_conf.ini file].
 
 
 
==  Nova Configuration (Compute Node(s))  ==
 
 
Edit the nova.conf file.
 
1. Configure the VIF driver and the libvirt VIF type:
 
  compute_driver=nova.virt.libvirt.driver.LibvirtDriver
 
  connection_type=libvirt
 
  libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver
 
2. Configure the vnic_type ('direct' or 'hostdev'). This is the default vNIC type used when creating a new vNIC (for example, when creating a vNIC from the dashboard):
 
  vnic_type= direct
 
3. Define the embedded switch-managed physical network (currently a single fabric per node):
 
  fabric=default - specifies physical network for vNICs
 
4. Enable the DHCP server to allow VMs to acquire IP addresses:
 
  quantum_use_dhcp=true
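
Putting these settings together, the relevant nova.conf fragment might look as follows (a sketch; security_group_api is set as described in the installation section):

  compute_driver=nova.virt.libvirt.driver.LibvirtDriver
  connection_type=libvirt
  libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver
  vnic_type=direct
  fabric=default
  quantum_use_dhcp=true
  security_group_api=nova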
 
 
 
= Network Node =
 
A network node equipped with a Mellanox ConnectX-3 adapter card should be configured as follows:
 
 
 
1. Install Mellanox OFED.
 
 
 
2. Install the Neutron Linux Bridge plugin and the Neutron DHCP/L3 agents:
 
 
 
  #yum install openstack-quantum 
 
  #yum install openstack-quantum-linuxbridge
 
 
 
3. Change the following configuration in the ini file (/etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini):

  physical_interface_mappings = default:eth2 (eth2 is typically the name of the Ethernet interface on Mellanox adapters)
 
 
 
4. Configure the DHCP server according to the following guidelines:
 
 
 
http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html
 
 
 
5. Start the DHCP and Linux Bridge agents:
 
  #/etc/init.d/quantum-dhcp-agent start
 
  #/etc/init.d/quantum-linuxbridge-agent start
 
 
 
= InfiniBand support =
 
The Mellanox Neutron Plugin uses InfiniBand partitions (PKeys) to separate networks.
 
 
 
== Configuring Partitions ==
 
There are two options for configuring InfiniBand partitions:
 
 
 
1. Manual configuration: all the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf).
 
 
 
2. Automatic configuration (future).
 
 
 
Add/change the following in the partitions.conf file:
 
  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
 
 
 
For every network you want to configure in Neutron, you must configure the PKey associated with that network's VLAN (as defined in Neutron). For example:
 
  net1=0x1, ipoib, sl=0, defmember=full : SELF;
 
 
 
For example:
 
If we have this configuration in /etc/quantum/plugins/mlnx/mlnx_conf.ini
 
 
 
  [MLNX]
 
  network_vlan_ranges = default:1:10
 
 
 
we will have the following configuration in the partitions.conf file:
 
 
  management=0xffff,ipoib, sl=0, defmember=both : ALL, ALL_SWITCHES=full,SELF=full;
 
  net1=0x1, ipoib, sl=0, defmember=full : ALL;
 
  net2=0x2, ipoib, sl=0, defmember=full : ALL;
 
  net3=0x3, ipoib, sl=0, defmember=full : ALL;
 
  net4=0x4, ipoib, sl=0, defmember=full : ALL;
 
  net5=0x5, ipoib, sl=0, defmember=full : ALL;
 
  net6=0x6, ipoib, sl=0, defmember=full : ALL;
 
  net7=0x7, ipoib, sl=0, defmember=full : ALL;
 
  net8=0x8, ipoib, sl=0, defmember=full : ALL;
 
  net9=0x9, ipoib, sl=0, defmember=full : ALL;
 
  net10=0xa, ipoib, sl=0, defmember=full : ALL;
 
 
 
Change the following in /etc/opensm/opensm.conf:
 
  allow_both_pkeys TRUE
 
 
 
 
 
Restart OpenSM:
 
  #/etc/init.d/opensmd restart
 
 
 
== Configuring the Quantum Server and Compute Nodes ==
 
=== SR-IOV (passthrough mode) ===
 
 
 
On the controller, change the following in the file /etc/quantum/plugins/mlnx/mlnx_conf.ini:
 
  tenant_network_type = ib
 
  vnic_type = hostdev
 
  network_vlan_ranges = default:1:10
 
  (sql_connection and reconnect_interval  can be configured as described above)
 
 
 
The mapping between VLAN and PKey is: PKey = 0x8000 + VLAN ID. For example, VLAN 2 maps to PKey 0x8002.
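
The PKey for a given VLAN ID can be computed with simple shell arithmetic, for example:

  #printf "0x%x\n" $((0x8000 + 2))
  0x8002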
 
 
 
On the compute node, change the following in the file /etc/quantum/plugins/mlnx/mlnx_conf.ini:
 
    physical_interface_mapping = default:autoib
 
 
 
The tenant_network_type, vnic_type, and network_vlan_ranges parameters should be configured as on the controller.
 
 
 
autoib can be replaced by the name of the PF.
 
 
 
 
 
Change the file /etc/eswitchd/eswitchd.conf
 
  fabrics  = default:autoib (or default:ib0)
 
 
 
Start eswitchd and then start the Neutron agent:

  #/etc/init.d/eswitchd restart

  #/etc/init.d/quantum-mlnx-agent restart
 
 
 
Verify that the Mellanox VIF driver is configured in /etc/nova/nova.conf:
 
  libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver
 
 
 
Restart Nova:
 
  #/etc/init.d/openstack-nova-compute restart
 
 
 
=== Para-Virtualized ===
 
Para-virtualized mode uses the Mellanox Neutron Plugin on the controller and the Linux Bridge plugin on the compute node.
 
 
 
On the controller, configure the file /etc/quantum/plugins/mlnx/mlnx_conf.ini
 
  tenant_network_type = ib
 
  vnic_type = bridge
 
 
 
In this mode, the Linux Bridge plugin is used on the compute node.
 
 
 
The eIPoIB module should be up and configured. In /etc/infiniband/openib.conf, set:

  E_IPOIB_LOAD=yes
 
 
 
Then restart openibd:

  #/etc/init.d/openibd restart
 
 
 
Refer to the eIPoIB configuration section in the Mellanox OFED User Manual.
 
 
 
Once the eIPoIB interface is available, use it in the Linux Bridge agent configuration.
 
 
 
For example, assuming eth1 is the eIPoIB interface, check that the interface type is eIPoIB by running the following command and verifying that the driver is "eth_ipoib":
 
  #ethtool -i <interface>
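
The output should resemble the following (a sketch; other fields will vary):

  #ethtool -i eth1
  driver: eth_ipoib
  ...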
 
 
 
Change the file /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
 
 
 
  [linux_bridge]
 
  physical_interface_mappings = default:eth1
 
 
 
Configure the Linux Bridge VIF Driver in /etc/nova/nova.conf
 
  libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
 
 
 
Restart Nova:
 
  #/etc/init.d/openstack-nova-compute restart
 
 
 
After configuration, the Linux Bridge agent should be restarted:
 
  #/etc/init.d/quantum-linuxbridge-agent restart
 
 
 
== Configuring Network Node ==
 
 
 
Configure the Linux Bridge plugin as described in the Para-Virtualized section above.

To use DHCP in para-virtualized mode (with Linux Bridge), make sure the 'bootp-broadcast-always' option is configured in the instance's dhclient.conf file, as sketched below.
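
For example, the instance's dhclient.conf (the path varies by distribution, e.g. /etc/dhcp/dhclient.conf) would contain the following statement, assuming a dhclient build that supports it:

  bootp-broadcast-always;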
 
 
 
=== DHCP Server ===
 
 
 
For DHCP support, the network node should use the Mellanox Dnsmasq driver as the DHCP driver.

Get the code:
 
  #git clone https://github.com/mellanox-openstack/mellanox-quantum-plugin
 
  #cd mellanox-quantum-plugin
 
  #git checkout v0.4
 
 
 
Copy the Mellanox Dnsmasq driver:
 
  #cp quantum/agent/linux/mlnx_dhcp.py /usr/lib/python2.6/site-packages/quantum/agent/linux   
 
  (the path may differ on other distributions or Python versions)
 
 
 
Change the following in /etc/quantum/dhcp_agent.ini:
 
 
 
  dhcp_driver=quantum.agent.linux.mlnx_dhcp.MlnxDnsmasq
 
 
 
Download ipoibd and use it to override /sbin/ipoibd:
 
  #wget http://www.mellanox.com/downloads/solutions/openstack/grizzly/0.3/ipoibd
 
 
 
Restart openibd
 
  #/etc/init.d/openibd restart
 
 
 
Start the Linux Bridge agent:

  #/etc/init.d/quantum-linuxbridge-agent restart

Start the DHCP agent:

  #/etc/init.d/quantum-dhcp-agent restart
 
 
 
= Usage Examples =
 
* To create an SR-IOV instance, refer to the "Creating an SR-IOV Instance" chapter of the [http://www.mellanox.com/sdn/stage/pdf/Mellanox-OpenStack-OpenFlow-Solution.pdf Mellanox OpenStack solution document].

* To create a para-virtualized vNIC instance, refer to the "Creating a Para-Virtualized vNIC Instance" chapter of the [http://www.mellanox.com/sdn/stage/pdf/Mellanox-OpenStack-OpenFlow-Solution.pdf Mellanox OpenStack solution document].
 
 
 
= References =

1. [http://www.mellanox.com/openstack/ OpenStack solution page at Mellanox site]

2. [https://opendev.org/x/networking-mlnx Mellanox ML2 driver and associated tools source repository]

3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED web page]

4. [http://community.mellanox.com/community/develop/cloud-developers Mellanox Cloud Developers Community]

5. [https://github.com/mellanox-openstack Old/previous Mellanox source repository]

6. [http://www.mellanox.com/sdn/stage/pdf/Mellanox-OpenStack-OpenFlow-Solution.pdf Mellanox OpenStack Solution Reference Architecture]

For more details, please refer your questions to [mailto:openstack@mellanox.com openstack@mellanox.com]

Return to the [https://wiki.openstack.org/wiki/Mellanox-OpenStack Mellanox-OpenStack] wiki page.

[[Category: Neutron]]
