SR-IOV Networking in OpenStack Juno

OpenStack Juno adds inbox support for requesting VM access to a virtual network via an SR-IOV NIC. With the introduction of SR-IOV based NICs, the traditional virtual bridge is no longer required. Each SR-IOV port is associated with a virtual function (VF). SR-IOV ports may be provided by Hardware-based Virtual Ethernet Bridging (HW VEB), or they may be extended to an upstream physical switch (IEEE 802.1br). There are two ways that an SR-IOV port may be connected:

  • directly connected to its VF
  • connected with a macvtap device that resides on the host, which is then connected to the corresponding VF

Nova

Nova support for SR-IOV enables scheduling an instance with SR-IOV ports based on their network connectivity: the physical networks associated with the neutron ports have to be considered in making the scheduling decision. The PCI whitelist has been enhanced to allow tags to be associated with PCI devices. PCI devices available for SR-IOV networking should be tagged with the physical_network label.

For SR-IOV networking, a pre-defined tag "physical_network" is used to define the physical network to which the devices are attached. A whitelist entry is defined as:

   ["vendor_id": "<id>",] ["product_id": "<id>",]
   ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
   "devname": "Ethernet Interface Name",]  
   "physical_network":"name string of the physical network"

<id> can be an asterisk (*) or a valid vendor/product ID as displayed by the Linux utility lspci. The address uses the same syntax as in lspci. The devname can be a valid PCI device name. The only device names that are supported are those displayed by the Linux utility ifconfig -a and correspond to either a PF or a VF on a vNIC.
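For example, the vendor and product IDs of the NICs on a host can be found by running lspci in numeric mode (the grep filter is only illustrative):

lspci -nn | grep -i ethernet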

If the device defined by the address or devname corresponds to an SR-IOV PF, all VFs under the PF will match the entry.

Multiple whitelist entries per host are supported.
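As an illustrative sketch, a compute node attached to two physical networks might carry one whitelist entry per network in /etc/nova/nova.conf; the addresses, interface name, and network names here are hypothetical:

pci_passthrough_whitelist = {"address":"*:0a:00.*","physical_network":"physnet1"}
pci_passthrough_whitelist = {"devname":"eth3","physical_network":"physnet2"}

The option may be repeated, one line per whitelist entry.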

Neutron

Neutron support for SR-IOV requires the ML2 plugin with an SR-IOV supporting mechanism driver. Currently there is an ML2 mechanism driver for SR-IOV capable NIC based switching (HW VEB). Network adapters from different vendors vary in the functionality they support. If VF link state updates are supported by the vendor's network adapter, the SR-IOV NIC L2 agent should be deployed to leverage this functionality.

VM creation flow with SR-IOV vNIC

  • Create one or more neutron ports. Run:
  neutron port-create <net-id> --binding:vnic-type <direct | macvtap | normal>
  • Boot VM with one or more neutron ports. Run:
  nova boot --flavor m1.large --image <image> \
         --nic port-id=<port1> --nic port-id=<port2> <vm name>

Note that in the nova boot API, users can specify either a port-ID or a net-ID. If a net-ID is specified, it is assumed that the user is requesting a normal virtual port (which is not an SR-IOV port).
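For example, a boot request that passes a net-ID instead of a pre-created port might look like this (image, net-ID, and VM name are placeholders) and results in a normal virtual port:

nova boot --flavor m1.large --image <image> --nic net-id=<net-id> <vm name>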

SR-IOV Configuration

Neutron Server

Using the ML2 Neutron plugin, modify /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = default:2:100
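Note that the name on the left of each network_vlan_ranges entry is the physical network label that the PCI whitelist and the agent mappings must reuse. If the deployment uses physnet1, as in the compute-side examples below, the entry would instead read (the VLAN range is illustrative):

network_vlan_ranges = physnet1:2:100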

Make sure /etc/neutron/plugins/ml2/ml2_conf_sriov.ini has the following section:

[ml2_sriov]
agent_required = True
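Depending on the release and the NIC vendor, the same section may also need to list the PCI vendor/product pairs that the SR-IOV mechanism driver will accept. The pairs below are believed to be the Juno defaults (Mellanox ConnectX-3 VF and Intel 82576 VF); treat them as an assumption and verify against the sample file shipped with your installation:

[ml2_sriov]
supported_pci_vendor_devs = 15b3:1004, 8086:10ca
agent_required = True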

The neutron server should be run with both plugin configuration files, /etc/neutron/plugins/ml2/ml2_conf.ini and /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:

neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini

Compute

Nova

On each compute node you have to associate the available Virtual Functions with each physical network. You do this by configuring pci_passthrough_whitelist in /etc/nova/nova.conf.

For example:

pci_passthrough_whitelist = {"address":"*:0a:00.*","physical_network":"physnet1"}

This associates any VF whose PCI address matches *:0a:00.* with the physical network physnet1.

After configuring the whitelist you have to restart the nova-compute service.
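For example, on a systemd-based distribution (the exact service name is distribution-specific and assumed here):

systemctl restart openstack-nova-compute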

Neutron

If the hardware supports it, and you want to enable changing the port admin_state, you have to run the Neutron SR-IOV agent.

Note: If you configured agent_required = True on the Neutron server, you must run the agent on each compute node.

Since SR-IOV port traffic bypasses the hypervisor's network stack, iptables-based security groups cannot be applied to it; in /etc/neutron/plugins/ml2/ml2_conf.ini make sure the firewall driver is set to the Noop driver:

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

Modify /etc/neutron/plugins/ml2/ml2_conf_sriov.ini as follows:

[sriov_nic]
physical_device_mappings = physnet1:eth1
exclude_devices =

Here physnet1 is the physical network and eth1 is the Physical Function (PF). Since exclude_devices is empty, all the VFs associated with eth1 are allowed to be configured by the agent.

After modifying the configuration file, start the Neutron SR-IOV agent:

neutron-sriov-nic-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini

Exclude VFs

If you want to exclude some of the VFs so that the agent won't configure them, list them in the [sriov_nic] section. Each entry maps a device name to a semicolon-separated list of VF PCI addresses, and entries for different devices are separated by commas. For example:

exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2
