https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Amir+Naddaf&feedformat=atomOpenStack - User contributions [en]2024-03-29T11:59:31ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=SR-IOV-Passthrough-For-Networking&diff=64831SR-IOV-Passthrough-For-Networking2014-10-12T11:42:05Z<p>Amir Naddaf: /* Compute */</p>
<hr />
<div>=SR-IOV Networking in OpenStack Juno= <br />
OpenStack Juno adds inbox support for requesting VM access to a virtual network via an SR-IOV NIC. With the introduction of SR-IOV-based NICs, the traditional virtual bridge is no longer required. Each SR-IOV port is associated with a virtual function (VF). SR-IOV ports may be provided by Hardware-based Virtual Ethernet Bridging (HW VEB), or they may be extended to an upstream physical switch (IEEE 802.1BR).<br />
There are two ways that an SR-IOV port may be connected:<br />
* directly connected to its VF<br />
* connected with a macvtap device that resides on the host, which is then connected to the corresponding VF<br />
<br />
==Nova==<br />
Nova support for SR-IOV enables scheduling an instance with SR-IOV ports based on their network connectivity. The physical networks associated with the neutron ports have to be considered in making the scheduling decision.<br />
The PCI whitelist has been enhanced to allow tags to be associated with PCI devices. PCI devices available for SR-IOV networking should be tagged with the physical_network label.<br />
<br />
For SR-IOV networking, a pre-defined tag "physical_network" is used to define the physical network to which the devices are attached. A whitelist entry is defined as:<br />
["vendor_id": "<id>",] ["product_id": "<id>",]<br />
["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |<br />
"devname": "Ethernet Interface Name",] <br />
"physical_network":"name string of the physical network"<br />
<br />
<id> can be an asterisk (*) or a valid vendor/product ID as displayed by the Linux utility lspci. The address uses the same syntax as in lspci. The devname must be a valid Ethernet interface name; the only device names that are supported are those displayed by the Linux utility ifconfig -a and that correspond to either a PF or a VF on a vNIC.<br />
<br />
If the device defined by the address or devname corresponds to a SR-IOV PF, all VFs under the PF will match the entry.<br />
<br />
Multiple whitelist entries per host are supported.<br />
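For illustration, a nova.conf whitelist entry using the vendor/product form described above might look like the following (the vendor and product IDs are placeholders and must match the VFs reported by lspci on your hosts):<br />
 pci_passthrough_whitelist = {"vendor_id":"15b3","product_id":"1004","physical_network":"physnet1"}<br />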
<br />
==Neutron== <br />
Neutron support for SR-IOV requires the ML2 plugin with an SR-IOV-capable mechanism driver.<br />
Currently there is an ML2 mechanism driver for SR-IOV-capable NIC-based switching (HW VEB).<br />
Network adapters from different vendors vary in the functionality they support.<br />
If VF link state update is supported by the vendor's network adapter, the SR-IOV NIC L2 agent should be deployed to leverage this functionality.<br />
<br />
==VM creation flow with SR-IOV vNIC== <br />
* Create one or more neutron ports. Run:<br />
neutron port-create <net-id> --binding:vnic-type <direct | macvtap | normal><br />
<br />
* Boot VM with one or more neutron ports. Run:<br />
nova boot --flavor m1.large --image <image><br />
--nic port-id=<port1> --nic port-id=<port2><br />
<br />
Note that in the nova boot API, users can specify either a port-ID or a net-ID. If a net-ID is specified, it is assumed that the user is requesting a normal virtual port (which is not an SR-IOV port).<br />
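For illustration, an end-to-end flow for a single direct vNIC could therefore look like this (the network ID, image, and VM name are placeholders, and the port ID is taken from the output of the first command):<br />
 neutron port-create <net-id> --binding:vnic-type direct<br />
 nova boot --flavor m1.large --image <image> --nic port-id=<port-id from the previous command> sriov-vm1<br />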
<br />
=SR-IOV Configuration=<br />
<br />
<br />
===Neutron Server===<br />
Using the ML2 Neutron plugin, modify /etc/neutron/plugins/ml2/ml2_conf.ini:<br />
<br />
[ml2]<br />
tenant_network_types = vlan<br />
type_drivers = vlan<br />
mechanism_drivers = openvswitch,sriovnicswitch<br />
[ml2_type_vlan]<br />
network_vlan_ranges = default:2:100<br />
<br />
Make sure /etc/neutron/plugins/ml2/ml2_conf_sriov.ini has the following section:<br />
<br />
[ml2_sriov]<br />
agent_required = True<br />
<br />
The Neutron server should be run with the two configuration files /etc/neutron/plugins/ml2/ml2_conf.ini and /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:<br />
neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini<br />
<br />
==Compute==<br />
===Nova===<br />
On each compute node you have to associate the available VFs with each physical network.<br />
That is done by configuring pci_passthrough_whitelist in /etc/nova/nova.conf. For example:<br />
pci_passthrough_whitelist = {"address":"*:0a:00.*","physical_network":"physnet1"}<br />
This associates any VF whose PCI address contains ':0a:00.' with the physical network physnet1.<br />
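Alternatively, the whitelist entry may be keyed on the interface name rather than the PCI address; a sketch of such an entry, where eth3 is a placeholder for the PF name on your host, would be:<br />
 pci_passthrough_whitelist = {"devname":"eth3","physical_network":"physnet1"}<br />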
<br />
After configuring the whitelist, you have to restart the nova-compute service.<br />
<br />
===Neutron===<br />
If the hardware supports it and you want to enable changing the port admin_state, you have to run the Neutron SR-IOV agent.<br /><br />
<br />
'''Note:''' If you configured agent_required=True on the Neutron server, you must run the agent on each compute node.<br />
<br />
In /etc/neutron/plugins/ml2/ml2_conf.ini make sure you have the following:<br />
[securitygroup]<br />
firewall_driver = neutron.agent.firewall.NoopFirewallDriver<br />
<br />
Modify /etc/neutron/plugins/ml2/ml2_conf_sriov.ini as follows:<br />
<br />
[sriov_nic]<br />
physical_device_mappings = physnet1:eth1<br />
exclude_devices =<br />
<br />
Where:<br />
* physnet1 is the physical network<br />
* eth1 is the physical function (PF)<br />
* exclude_devices is empty so all the VFs associated with eth1 may be configured by the agent<br />
<br />
After modifying the configuration file, start the Neutron SR-IOV agent. Run:<br />
neutron-sriov-nic-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini<br />
<br />
====Exclude VFs====<br />
If you want to exclude some of the VFs so the agent does not configure them, you need to list them in the sriov_nic section:<br /><br />
<br />
'''Example:''' exclude_devices = eth1:0000:07:00.2; 0000:07:00.3, eth2:0000:05:00.1; 0000:05:00.2<br />
<br />
=References=<br />
<br />
[http://community.mellanox.com/docs/DOC-1484 Openstack ML2 SR-IOV driver support]</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Cinder&diff=18517Mellanox-Cinder2013-02-28T15:42:30Z<p>Amir Naddaf: /* References */</p>
<hr />
<div>= Overview = <br />
This page describes Mellanox OpenStack iSER (iSCSI Extensions for RDMA) support for Cinder.<br />
<br />
= Installation =<br />
In order to add iSER support to Cinder perform the following operations:<br />
<br />
1. Update the following two flags in /etc/cinder/cinder.conf.<br />
 transport = iser (the default is iscsi)<br />
 iser_ip_address = 192.168.20.140 (used instead of the default "iscsi_ip_address" option)<br />
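A minimal sketch of the resulting /etc/cinder/cinder.conf fragment, assuming the options live in the [DEFAULT] section and that 192.168.20.140 is the address of the iSER-capable interface on the storage node:<br />
 [DEFAULT]<br />
 transport = iser<br />
 iser_ip_address = 192.168.20.140<br />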
<br />
2. Replace the files under cinder/volume.<br />
<br />
* driver.py<br />
* iscsi.py<br />
<br />
<br />
3. Replace the file under nova/virt/libvirt.<br />
<br />
* volume.py<br />
<br />
<br />
<br />
For additional details refer to "*.patch" files.<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ OpenStack solution page at Mellanox site]<br />
<br />
2. [http://www.mellanox.com/openstack/ Source repository ]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED web page]<br />
<br />
For more details, please refer any inquiries to [mailto:openstack@mellanox.com openstack@mellanox.com].<br />
<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18514Mellanox-Quantum2013-02-28T15:37:24Z<p>Amir Naddaf: /* Quantum Configuration */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
The Mellanox Quantum Plugin gives each Virtual Machine vNIC a dedicated hardware vNIC (based on an SR-IOV virtual function) with its own<br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.<br />
<br />
This plugin is implemented according to the Plugin/Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility with the Linux Bridge Plugin and supports the DHCP and L3 agents by running the L2 Linux Bridge Agent on the network node.<br />
* The Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node.<br />
* The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equipped with a Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link]).<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retrieve this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server.<br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Nova Mellanox VIF driver.<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf.<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart Nova.<br />
<br />
=== The eswitchd Daemon ===<br />
1. Copy daemon files.<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment.<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent.<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node.<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini.<br />
<br />
4. Run the agent.<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing core_plugin.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration: Install MySQL on the central server. Create a database named "quantum".<br />
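For example, assuming a local MySQL installation and a dedicated database user (the user name and password below are placeholders), the database can be created with:<br />
 mysql -u root -p -e "CREATE DATABASE quantum;"<br />
 mysql -u root -p -e "GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'secret_password';"<br />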
<br />
3. Plugin configuration:<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
 tenant_network_type - must be set to one of the supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
 polling_interval - interval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
 physical_interface_mapping - maps each physical network name to the physical interface (on top of the Mellanox Adapter) connecting the node to that physical network. <br />
<br />
For a plugin configuration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file].<br />
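In addition to the sample referenced above, a minimal illustrative mlnx_conf.ini might look like the following; the connection string, VLAN range, and interface name are placeholders that must match your environment:<br />
 [DATABASE]<br />
 sql_connection = mysql://quantum:secret_password@127.0.0.1:3306/quantum<br />
 <br />
 [VLANS]<br />
 tenant_network_type = vlan<br />
 network_vlan_ranges = default:2:100<br />
 <br />
 [ESWITCH]<br />
 physical_interface_mapping = default:eth2<br />
 <br />
 [AGENT]<br />
 polling_interval = 2<br />
 rpc = True<br />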
<br />
== Nova Configuration (Compute Node(s)) ==<br />
Edit the nova.conf file.<br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev').<br />
vnic_type= direct <br />
3. Define the embedded switch-managed physical network (currently single fabric on node).<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs.<br />
quantum_use_dhcp=true<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [https://github.com/mellanox-openstack https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please send any questions to [mailto:openstack@mellanox.com openstack@mellanox.com].<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18513Mellanox-Quantum2013-02-28T15:36:24Z<p>Amir Naddaf: /* Nova Configuration (compute node(s)) */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server.<br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Nova Mellanox VIF driver.<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf.<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart Nova.<br />
<br />
=== The eswitchd Daemon ===<br />
1. Copy daemon files.<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment.<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent.<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node.<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini.<br />
<br />
4. Run the agent.<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing core_plugin.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration: Install MySQL on the central server. Create a database named "quantum".<br />
<br />
3. Plugin configuration:<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
tenant_network_type - must be set on of supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
polling_interval - interfval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
physical_interface_mapping - the network_interface_mappings maps each physical network name to the physical interface (on top of Mellanox Adapter) connecting the node to that physical network. <br />
<br />
For a plugin configuration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (Compute Node(s)) ==<br />
------------------------------------<br />
Edit the nova.conf file.<br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev').<br />
vnic_type= direct <br />
3. Define the embedded switch-managed physical network (currently single fabric on node).<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs.<br />
quantum_use_dhcp=true<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [http://www.mellanox.com/openstack/ https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please refer your question to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18512Mellanox-Quantum2013-02-28T15:32:56Z<p>Amir Naddaf: /* Quantum Configuration */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server.<br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Nova Mellanox VIF driver.<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf.<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart Nova.<br />
<br />
=== The eswitchd Daemon ===<br />
1. Copy daemon files.<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment.<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent.<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node.<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini.<br />
<br />
4. Run the agent.<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing core_plugin.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration: Install MySQL on the central server. Create a database named "quantum".<br />
<br />
3. Plugin configuration:<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
 tenant_network_type - must be set to one of the supported tenant network types<br />
 network_vlan_ranges - must be configured to specify the names of the physical networks<br />
 managed by the Mellanox plugin, along with the ranges of VLAN IDs<br />
 available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
 polling_interval - the interval at which to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
 physical_interface_mapping - maps each physical network name to the physical interface (on the Mellanox adapter) connecting the node to that physical network. <br />
<br />
For a plugin configuration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
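<br />
For illustration only, a minimal mlnx_conf.ini might look like the following. All values are placeholders to adapt to your environment, and the value formats (<physical_network>:<min>:<max> for VLAN ranges, <physical_network>:<interface> for the interface mapping) follow the convention used by similar Quantum plugins; the sample file referenced above is authoritative.<br />
 [DATABASE]<br />
 sql_connection = mysql://quantum:secret@<central-server-ip>/quantum<br />
 <br />
 [VLANS]<br />
 tenant_network_type = vlan<br />
 network_vlan_ranges = default:2:100<br />
 <br />
 [AGENT]<br />
 polling_interval = 2<br />
 rpc = True<br />
 <br />
 [ESWITCH]<br />
 physical_interface_mapping = default:eth2<br />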
<br />
== Nova Configuration (compute node(s)) ==<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
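<br />
Taken together, the relevant nova.conf fragment on a compute node would look roughly like this ('direct' and 'default' are example values, as noted above):<br />
 compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
 connection_type=libvirt<br />
 libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
 vnic_type=direct<br />
 fabric=default<br />
 quantum_use_dhcp=true<br />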
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [https://github.com/mellanox-openstack https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please direct your questions to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18511Mellanox-Quantum2013-02-28T15:25:28Z<p>Amir Naddaf: /* Mellanox Quantum Plugin Configuration */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The OpenStack Mellanox Quantum plugin supports the Mellanox embedded switch functionality that is part of the VPI (Ethernet/InfiniBand) HCA. <br />
The Mellanox Quantum plugin gives each Virtual Machine vNIC a hardware vNIC (based on an SR-IOV virtual function) with its own <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a virtual PCI device in the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.<br />
<br />
This plugin is implemented according to the Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility with the Linux Bridge plugin and supports the DHCP and L3 agents by running the L2 Linux Bridge agent on the Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* The agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an embedded switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running the Mellanox Quantum plugin. This driver handles VIF plugging by binding the vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equipped with a Mellanox ConnectX®-2/ConnectX®-3 network adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retrieve this version.<br />
Refer to the Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RHEL 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
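<br />
The software packages listed in items 3-5 are generally available from the distribution repositories; for example, on a RHEL-based node they can typically be installed with (package names vary by distribution, and python-zmq may require the EPEL repository):<br />
 #yum install python-zmq iproute ethtool<br />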
<br />
== Code Structure ==<br />
<br />
The Mellanox Quantum plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox OpenStack]<br />
<br />
1. Quantum Plugin package structure:<br />
 quantum/etc/quantum/plugins/mlnx - plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server.<br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Nova Mellanox VIF driver.<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf.<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
 fabric=default - specifies the physical network for vNICs (currently one fabric per node is supported)<br />
<br />
3. Restart Nova.<br />
<br />
=== The eswitchd Daemon ===<br />
1. Copy daemon files.<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment.<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
<br />
3. Run the daemon:<br />
 #/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent.<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node.<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini.<br />
<br />
4. Run the agent.<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing core_plugin.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration: Install MySQL on the central server. Create a database named "quantum".<br />
<br />
3. Plugin configuration:<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
 tenant_network_type - must be set to one of the supported tenant network types<br />
 network_vlan_ranges - must be configured to specify the names of the physical networks<br />
 managed by the Mellanox plugin, along with the ranges of VLAN IDs<br />
 available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
 polling_interval - the interval at which to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
 physical_interface_mapping - maps each physical network name to the physical interface (on the Mellanox adapter) connecting the node to that physical network. <br />
<br />
For a plugin configuration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [https://github.com/mellanox-openstack https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please direct your questions to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18510Mellanox-Quantum2013-02-28T15:23:08Z<p>Amir Naddaf: /* Quantum Agent */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server.<br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Nova Mellanox VIF driver.<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf.<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart Nova.<br />
<br />
=== The eswitchd Daemon ===<br />
1. Copy daemon files.<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment.<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent.<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node.<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini.<br />
<br />
4. Run the agent.<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by edit quantum.conf and change the core_plugin<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration<br />
MySQL should be installed on the central server. A database named quantum should be created<br />
<br />
3. Plugin configuration<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
tenant_network_type - must be set on of supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
polling_interval - interfval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
physical_interface_mapping - the network_interface_mappings maps each physical network name to the physical interface (on top of Mellanox Adapter) connecting the node to that physical network. <br />
<br />
For Plugin consfiguration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
------------------------------------<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [http://www.mellanox.com/openstack/ https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please refer your question to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18509Mellanox-Quantum2013-02-28T15:22:34Z<p>Amir Naddaf: /* Mellanox Quantum Plugin Installation */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to the installed quantum plugin directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins).<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server.<br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Nova Mellanox VIF driver.<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf.<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart Nova.<br />
<br />
=== The eswitchd Daemon ===<br />
1. Copy daemon files.<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment.<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
4. Run the agent<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by edit quantum.conf and change the core_plugin<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration<br />
MySQL should be installed on the central server. A database named quantum should be created<br />
<br />
3. Plugin configuration<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
tenant_network_type - must be set on of supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
polling_interval - interfval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
physical_interface_mapping - the network_interface_mappings maps each physical network name to the physical interface (on top of Mellanox Adapter) connecting the node to that physical network. <br />
<br />
For Plugin consfiguration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
------------------------------------<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [http://www.mellanox.com/openstack/ https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please refer your question to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18508Mellanox-Quantum2013-02-28T15:03:19Z<p>Amir Naddaf: /* Mellanox Quantum Plugin Installation */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox OpenStack plugin to installed quantum plugins directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins)<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server <br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the nova Mellanox vifDriver<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart nova<br />
<br />
=== The eswitchd daemon ===<br />
1. Copy daemon files<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox OpenStack agent<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
4. Run the agent<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by edit quantum.conf and change the core_plugin<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration<br />
MySQL should be installed on the central server. A database named quantum should be created<br />
<br />
3. Plugin configuration<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
tenant_network_type - must be set on of supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
polling_interval - interfval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
physical_interface_mapping - the network_interface_mappings maps each physical network name to the physical interface (on top of Mellanox Adapter) connecting the node to that physical network. <br />
<br />
For Plugin consfiguration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
------------------------------------<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [http://www.mellanox.com/openstack/ https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please refer your question to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18507Mellanox-Quantum2013-02-28T12:39:38Z<p>Amir Naddaf: /* Code Structure */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF driver package structure is:<br />
nova/nova/mlnx - nova vif driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox openstack plugin to installed quantum plugins directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins)<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server <br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the nova Mellanox vifDriver<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart nova<br />
<br />
=== The eswitchd daemon ===<br />
1. Copy daemon files<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox openstack agent<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
4. Run the agent<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by edit quantum.conf and change the core_plugin<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration<br />
MySQL should be installed on the central server. A database named quantum should be created<br />
<br />
3. Plugin configuration<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
tenant_network_type - must be set on of supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
polling_interval - interfval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
physical_interface_mapping - the network_interface_mappings maps each physical network name to the physical interface (on top of Mellanox Adapter) connecting the node to that physical network. <br />
<br />
For Plugin consfiguration file example, please refer to [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
------------------------------------<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [http://www.mellanox.com/openstack/ https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please refer your question to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18506Mellanox-Quantum2013-02-28T12:36:08Z<p>Amir Naddaf: /* Code Structure */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
Mellanox Quantum Plugin allows hardware vNICs (based on SR-IOV virtual functions) per each Virtual Machine vNIC to have its unique <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching, provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include OpenFlow API to control and monitor the embedded switch and vNICs functionality <br />
<br />
This plugin is implemented according to Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses databases to store configuration and allocation mapping.<br />
* The plugin maintains compatibility to Linux Bridge Plugin, supports DHCP and L3 Agents by running L2 Linux Bridge Agent on Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) runs on each compute node. <br />
* Agent should apply VIF connectivity based on mapping between a VIF (VM vNIC) and Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running Mellanox Quantum Plugin. This driver supports the VIF plugin by binding vNIC (para-virtualized or SR-IOV with optional RDMA guest access) to the embedded switch port.<br />
<br />
== Prerequisites ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equiped with Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 is installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retreive this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. The software package python-zmq ([https://github.com/zeromq/pyzmq github]) must be installed.<br />
<br />
4. The software package iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation]) must be installed.<br />
<br />
5. The software package ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code]) must be installed.<br />
<br />
6. RH 6.3 or above.<br />
<br />
7. Ubuntu 11.10 or above (Future).<br />
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and the Nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx -plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/.<br />
<br />
2. Nova VIF Driver package structure is:<br />
 nova/nova/virt/libvirt/mlnx - nova VIF driver code<br />
<br />
Mellanox Nova VIF driver is located under /nova/virt/libvirt/.<br />
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy the Mellanox OpenStack plugin to the installed quantum plugins directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins)<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server <br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
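<br />
To confirm the server started with the Mellanox plugin loaded, watch the quantum-server log for errors; the log path below is a common default and may differ in your installation:<br />
 #tail -f /var/log/quantum/server.log<br />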
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the nova Mellanox VIF driver<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
 fabric=default - specifies the physical network for vNICs (currently supports one fabric per node)<br />
<br />
3. Restart nova<br />
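<br />
The exact restart command depends on the distribution packaging; for example, on a RHEL-based node it is typically:<br />
 #/etc/init.d/openstack-nova-compute restart<br />
while on Ubuntu the service is usually named nova-compute:<br />
 #service nova-compute restart<br />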
<br />
=== The eswitchd daemon ===<br />
1. Copy daemon files<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox openstack agent<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini files to the compute node<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
4. Run the agent<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
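<br />
A quick way to confirm the agent stayed up is to check for its process; any RPC or eswitch daemon connection problems will appear in the agent's console or log output:<br />
 #ps -ef | grep eswitch_quantum_agent<br />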
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing the core_plugin:<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration<br />
MySQL should be installed on the central server, and a database named quantum should be created.<br />
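<br />
For example, the database and access privileges can be created with statements along these lines (the quantum user name and the password are placeholders; substitute your own credentials and make sure they match sql_connection below):<br />
 mysql> CREATE DATABASE quantum;<br />
 mysql> GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'password';<br />
 mysql> GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'password';<br />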
<br />
3. Plugin configuration<br />
Edit the configuration file: /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
 tenant_network_type - must be set to one of the supported tenant network types<br />
 network_vlan_ranges - must be configured to specify the names of the physical networks managed by the Mellanox plugin, along with the ranges of VLAN IDs available on each physical network for allocation to virtual networks.<br />
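<br />
A minimal sketch of the server-side section, assuming a local MySQL database named quantum and a single physical network called "default" with VLANs 2-100 (all values here are illustrative):<br />
 [DATABASE]<br />
 sql_connection = mysql://quantum:password@localhost/quantum<br />
 [VLANS]<br />
 tenant_network_type = vlan<br />
 network_vlan_ranges = default:2:100<br />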
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
 polling_interval - interval at which to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
 physical_interface_mapping - maps each physical network name to the physical interface (on top of the Mellanox adapter) connecting the node to that physical network. <br />
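<br />
A matching compute-side sketch, assuming the "default" physical network is reached through the Mellanox interface eth2 (the interface name and polling interval are illustrative):<br />
 [AGENT]<br />
 polling_interval = 2<br />
 rpc = True<br />
 [ESWITCH]<br />
 physical_interface_mapping = default:eth2<br />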
<br />
For an example plugin configuration file, please refer to the [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox sample mlnx_conf.ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
Edit the nova.conf file <br />
1. Configure the compute driver, libvirt connection type, and VIF driver<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type= direct <br />
3. Define the Embedded Switch managed physical network (currently a single fabric per node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
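<br />
Putting the above together, the relevant nova.conf fragment looks roughly like this (the option values simply restate the settings listed in steps 1-4):<br />
 compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
 connection_type=libvirt<br />
 libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
 vnic_type=direct<br />
 fabric=default<br />
 quantum_use_dhcp=true<br />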
<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [https://github.com/mellanox-openstack https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please direct your questions to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddafhttps://wiki.openstack.org/w/index.php?title=Mellanox-Quantum&diff=18496Mellanox-Quantum2013-02-28T07:44:58Z<p>Amir Naddaf: /* Mellanox Quantum Plugin */</p>
<hr />
<div><br />
<br />
= Overview = <br />
== Mellanox Quantum Plugin ==<br />
The Openstack Mellanox Quantum plugin supports Mellanox embedded switch functionality as part of the VPI (Ethernet/InfiniBand) HCA. <br />
The Mellanox Quantum Plugin allows each Virtual Machine vNIC to be backed by a hardware vNIC (based on an SR-IOV Virtual Function) with its own <br />
connectivity, security, and QoS attributes. Hardware vNICs can be mapped to the guest VMs through para-virtualization (using a Tap device), or directly as a Virtual PCI device to the guest, allowing higher performance and advanced features such as RDMA (remote direct memory access).<br />
<br />
Hardware-based switching provides better performance, functionality, and security/isolation for virtual cloud environments.<br />
Future versions of the plug-in will include an OpenFlow API to control and monitor the embedded switch and vNIC functionality.<br />
<br />
This plugin is implemented according to the Plugin-Agent pattern.<br />
<br />
<br />
+-----------------+ +--------------+<br />
| Controller node | | Compute node |<br />
+-----------------------------------+ +-----------------------------------+<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
| | | | | | | | | zmq | | |<br />
| | Openstack | v2.0 | Mellanox | | RPC | | Mellanox |REQ/REP| Mellanox | |<br />
| | Quantum +------+ Quantum +-----------+ Quantum +-------+ Embedded | |<br />
| | | | Plugin | | | | Agent | | Switch | |<br />
| | | | | | | | | | (NIC) | |<br />
| +-----------+ +----------+ | | +----------+ +----------+ |<br />
+-----------------------------------+ +-----------------------------------+<br />
<br />
* Openstack Mellanox Quantum Plugin implements the Quantum v2.0 API.<br />
* Mellanox Quantum Plugin processes the Quantum API calls and manages network segmentation ID allocation. <br />
* The plugin uses a database to store configuration and allocation mappings. <br />
* The plugin maintains compatibility with the Linux Bridge Plugin and supports the DHCP and L3 Agents by running the L2 Linux Bridge Agent on the Network Node.<br />
* Mellanox Openstack Quantum Agent (L2 Agent) should run on each compute node. <br />
* The Agent applies VIF connectivity based on the mapping between a VIF (VM vNIC) and an Embedded Switch port.<br />
<br />
== Mellanox Nova VIF Driver ==<br />
The Mellanox Nova VIF driver should be used when running the Mellanox Quantum Plugin. The VIF driver performs VIF plugging by binding the vNIC (para-virtualized, or SR-IOV with optional RDMA guest access) to the Embedded Switch port.<br />
<br />
== Prerequisite ==<br />
The following are the Mellanox Quantum Plugin prerequisites:<br />
<br />
1. Compute nodes should be equipped with a Mellanox ConnectX®-2/ConnectX®-3 Network Adapter ([http://www.mellanox.com/page/infiniband_cards_overview link])<br />
<br />
2. Mellanox OFED 2.0 must be installed. Contact [mailto:openstack@mellanox.com?subject=MLNX_OFED2.0 openstack@mellanox.com] to retrieve this version.<br />
Refer to Mellanox website for the latest OFED documentation ([http://www.mellanox.com/page/products_dyn?product_family=26 link])<br />
<br />
3. python-zmq ([https://github.com/zeromq/pyzmq github])<br />
<br />
4. iproute2 - ([http://www.linuxgrill.com/anonymous/iproute2/ Code] [http://www.policyrouting.org/iproute2.doc.html Documentation])<br />
<br />
5. ethtool ([http://www.kernel.org/pub/software/network/ethtool/ Code])<br />
<br />
6. RH 6.3 or above<br />
<br />
7. Ubuntu 11.10 or above (Future)<br />
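The prerequisites above can be quickly verified from the shell. This is only an illustrative checklist; the exact commands (in particular ofed_info) depend on the packages installed on the node:<br />
 #ofed_info -s<br />
 #python -c "import zmq; print zmq.__version__"<br />
 #ip -V<br />
 #ethtool --version<br />
 #cat /etc/redhat-release<br />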
<br />
== Code Structure ==<br />
<br />
Mellanox Quantum Plugin and supporting nova VIF driver are located at [http://github.com/mellanox-openstack Mellanox openStack]<br />
<br />
1. Quantum Plugin package structure:<br />
quantum/etc/quantum/plugins/mlnx - plugin configuration<br />
mlnx_conf.ini - sample plugin configuration<br />
<br />
quantum/quantum/plugins/mlnx - plugin code<br />
/agent - Agent code<br />
/common - common code<br />
/db - plugin persistency model and wrapping methods<br />
mlnx_plugin.py - Mellanox Openstack Plugin<br />
rpc_callbacks.py - RPC handler for received messages<br />
agent_notify_api.py - Agent RPC notify methods<br />
<br />
Mellanox Quantum Plugin is located under /quantum/quantum/plugins/ <br />
<br />
2. Nova VIF Driver package structure is:<br />
nova/nova/virt/libvirt/mlnx - nova VIF driver code<br />
<br />
Mellanox nova VIF driver is located under /nova/virt/libvirt/<br />
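Once the plugin and the VIF driver have been copied into place (see the installation steps below), the installed locations can be verified with a simple listing; the python2.6/python2.7 path prefixes are distribution dependent and are only examples:<br />
 #ls /usr/lib/python2.7/dist-packages/quantum/plugins/mlnx<br />
 #ls /usr/lib/python2.6/site-packages/nova/virt/libvirt/mlnx<br />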
<br />
= Mellanox Quantum Plugin Installation =<br />
== On the Quantum Server Node ==<br />
1. Copy Mellanox openstack plugin to installed quantum plugins directory (usually /usr/lib/python2.7/dist-packages/quantum/plugins)<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.7/dist-packages/quantum/plugins<br />
<br />
2. Modify the /etc/quantum/quantum.conf file.<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
3. Copy the Mellanox plugin configuration. <br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
4. Modify the /etc/quantum/plugins/mlnx/mlnx_conf.ini file to reflect your environment.<br />
<br />
5. Run the server <br />
#quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
or <br />
#/etc/init.d/quantum-server start<br />
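As a quick sanity check, assuming the python-quantumclient package is installed and the usual OpenStack credentials are exported, listing networks should return without a plugin error:<br />
 #quantum net-list<br />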
<br />
== On Compute Nodes ==<br />
<br />
=== Nova-compute ===<br />
<br />
1. Copy the Mellanox nova VIF driver<br />
#cp -a mellanox-quantum-plugin/nova/nova/virt/libvirt/mlnx /usr/lib/python2.6/site-packages/nova/virt/libvirt<br />
<br />
2. Modify nova.conf<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
vnic_type=direct - can be either 'direct' or 'hostdev'<br />
fabric=default - specifies physical network for vNICs (currently support one fabric per node)<br />
<br />
3. Restart nova<br />
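The exact restart command is distribution dependent; for example, one of the following (the service name is an assumption):<br />
 #service openstack-nova-compute restart<br />
or <br />
 #/etc/init.d/nova-compute restart<br />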
<br />
=== The eswitchd daemon ===<br />
1. Copy daemon files<br />
#cp -a daemon /opt/mlnx_daemon<br />
<br />
2. Copy the configuration file and modify it according to your environment<br />
#mkdir /etc/mlnx_daemon<br />
#cp /opt/mlnx_daemon/etc/mlnx_daemon.conf /etc/mlnx_daemon<br />
3. Run the daemon:<br />
/opt/mlnx_daemon/eswitch_daemon.py<br />
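The daemon must remain running on each compute node. A minimal sketch for starting it in the background and verifying it is alive (the log file location is only an example):<br />
 #nohup /opt/mlnx_daemon/eswitch_daemon.py > /var/log/eswitchd.log 2>&1 &<br />
 #ps -ef | grep eswitch_daemon<br />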
<br />
=== Quantum Agent ===<br />
1. Copy Mellanox openstack agent<br />
#cp -a mellanox-quantum-plugin/quantum/quantum/plugins/mlnx /usr/lib/python2.6/site-packages/quantum/plugins<br />
<br />
2. Copy the quantum.conf and mlnx_conf.ini file to the compute node<br />
#mkdir -p /etc/quantum/plugins/mlnx<br />
#cp mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini /etc/quantum/plugins/mlnx<br />
<br />
3. Modify the Quantum Agent configuration at /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
4. Run the agent<br />
#python /usr/lib/python2.6/site-packages/quantum/plugins/mlnx/agent/eswitch_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
= Mellanox Quantum Plugin Configuration =<br />
== Quantum Configuration ==<br />
1. Make the Mellanox plugin the current quantum plugin by editing quantum.conf and changing the core_plugin<br />
core_plugin = quantum.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin<br />
<br />
2. Database configuration<br />
MySQL should be installed on the central server, and a database named quantum should be created.<br />
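A minimal sketch of this step, assuming MySQL runs locally and using placeholder credentials; the resulting user/password must match the sql_connection option in mlnx_conf.ini:<br />
 mysql> CREATE DATABASE quantum;<br />
 mysql> GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'password';<br />
 <br />
 sql_connection = mysql://quantum:password@127.0.0.1:3306/quantum<br />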
<br />
3. Plugin configuration<br />
Edit the configuration file: etc/quantum/plugins/mlnx/mlnx_conf.ini<br />
<br />
On central server node<br />
<br />
[DATABASE]<br />
sql_connection - must match the mysql configuration<br />
<br />
[VLANS]<br />
tenant_network_type - must be set to one of the supported tenant network types<br />
network_vlan_ranges - must be configured to specify the names of the physical networks<br />
managed by the Mellanox plugin, along with the ranges of VLAN IDs<br />
available on each physical network for allocation to virtual networks. <br />
<br />
On compute node(s)<br />
<br />
[AGENT]<br />
polling_interval - interval to poll for existing vNICs<br />
rpc - must be set to True<br />
<br />
[ESWITCH]<br />
physical_interface_mapping - maps each physical network name to the physical interface (on the Mellanox adapter) connecting the node to that physical network. <br />
<br />
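The following is a minimal sketch of an mlnx_conf.ini that ties the options above together. The physical network name (default), the VLAN range and the interface name (eth2) are placeholders, and the exact option names and value formats should be checked against the sample file referenced below:<br />
 [DATABASE]<br />
 sql_connection = mysql://quantum:password@127.0.0.1:3306/quantum<br />
 <br />
 [VLANS]<br />
 tenant_network_type = vlan<br />
 network_vlan_ranges = default:2:100<br />
 <br />
 [AGENT]<br />
 polling_interval = 2<br />
 rpc = True<br />
 <br />
 [ESWITCH]<br />
 # assumed format: <physical network name>:<interface name><br />
 physical_interface_mapping = default:eth2<br />
<br />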
For a plugin configuration file example, please refer to the [http://github.com/mellanox-openstack/mellanox-quantum-plugin/quantum/etc/quantum/plugins/mlnx/mlnx_conf.ini Mellanox config *ini file]<br />
<br />
== Nova Configuration (compute node(s)) ==<br />
Edit the nova.conf file <br />
1. Configure the vif driver, and libvirt/vif type<br />
compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
connection_type=libvirt<br />
libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
2. Configure vnic_type ('direct' or 'hostdev')<br />
vnic_type=direct<br />
3. Define Embedded Switch managed physical network (currently single fabric on node)<br />
fabric=default - specifies physical network for vNICs<br />
4. Enable DHCP server to allow VMs to acquire IPs <br />
quantum_use_dhcp=true<br />
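Putting the four items above together, the relevant nova.conf fragment would look roughly as follows (the fabric name default is a placeholder for your physical network):<br />
 compute_driver=nova.virt.libvirt.driver.LibvirtDriver<br />
 connection_type=libvirt<br />
 libvirt_vif_driver=nova.virt.libvirt.mlnx.vif.MlxEthVIFDriver<br />
 vnic_type=direct<br />
 fabric=default<br />
 quantum_use_dhcp=true<br />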
<br />
<br />
= References =<br />
1. [http://www.mellanox.com/openstack/ http://www.mellanox.com/openstack/]<br />
<br />
2. [https://github.com/mellanox-openstack https://github.com/mellanox-openstack]<br />
<br />
3. [http://www.mellanox.com/page/products_dyn?product_family=26 Mellanox OFED]<br />
<br />
For more details, please direct your questions to [mailto:openstack@mellanox.com openstack@mellanox.com]<br />
<br />
Return to [https://wiki.openstack.org/wiki/Mellanox Mellanox] wiki page.</div>Amir Naddaf