

Nova: support Neutron SR-IOV ports

Background

This blueprint is based on the discussions documented in this wiki page [1].

While the blueprint [2] addresses common SR-IOV support in Nova, this blueprint captures the changes needed in Nova to support Neutron SR-IOV ports.

Traditionally, a neutron port is a virtual port attached to either a Linux bridge or an Open vSwitch bridge on a compute node. With the introduction of SR-IOV, the intermediate virtual bridge is no longer required. Instead, the SR-IOV port is associated with a virtual function (VF) supplied by an SR-IOV-capable NIC adapter. In addition, the SR-IOV port may be extended to an upstream physical switch (IEEE 802.1BR), in which case the port's configuration takes place in that switch. The SR-IOV port can also be connected to a macvtap device that resides on the host, which is in turn connected to a VF on the NIC. The benefit of using a macvtap device is that it makes live migration with SR-IOV possible. We'll use a combination of vnic-type and vif-type (defined below) to support the above requirements.
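
For illustration, the two attachment modes correspond to different libvirt interface definitions. Below is a minimal Python sketch (standard library only) of the shape of XML involved; the PCI address and device name are made up, and in practice the XML is generated by nova.virt.libvirt.

  # Illustrative only: shapes of the libvirt interface XML for a "direct"
  # (hostdev) attachment and a macvtap attachment of an SR-IOV VF.
  import xml.etree.ElementTree as ET

  def direct_vf_interface(domain, bus, slot, function, vlan_id=None):
      """Guest interface backed directly by a VF (<interface type='hostdev'>)."""
      iface = ET.Element('interface', type='hostdev', managed='yes')
      source = ET.SubElement(iface, 'source')
      ET.SubElement(source, 'address', type='pci', domain=domain,
                    bus=bus, slot=slot, function=function)
      if vlan_id is not None:
          vlan = ET.SubElement(iface, 'vlan')
          ET.SubElement(vlan, 'tag', id=str(vlan_id))
      return iface

  def macvtap_vf_interface(vf_netdev):
      """Guest interface attached through a macvtap device on top of a VF."""
      iface = ET.Element('interface', type='direct')
      ET.SubElement(iface, 'source', dev=vf_netdev, mode='passthrough')
      return iface

  # Made-up VF address 0000:06:10.2 and VF netdev name 'eth4'
  print(ET.tostring(direct_vf_interface('0x0000', '0x06', '0x10', '0x2', 100)))
  print(ET.tostring(macvtap_vf_interface('eth4')))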

In this document, we use the term neutron SR-IOV port to refer to a VF that can be configured as an Ethernet interface.

A NIC adapter may expose multiple physical functions (PFs), each of which supports the configuration of multiple VFs. This means that neutron SR-IOV ports are a limited resource on a compute node. In addition, a neutron SR-IOV port is connected to a physical network, and different SR-IOV ports may be connected to different physical networks. Therefore, a VM that requires an SR-IOV port on a particular network must be placed on a compute node that provides neutron SR-IOV ports on that network.

nova boot: Specify a neutron SR-IOV port

Given the limited time we have in Icehouse, we decided not to change the syntax of the nova boot API initially. Instead, to specify a neutron SR-IOV port for a VM, the semantics of the port-id parameter in the --nic option will be extended to support SR-IOV ports. With these blueprints, each neutron port will be associated with a binding:profile dictionary in which the port's vnic-type and pci flavor are defined.
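
As an illustrative example, a port intended for direct SR-IOV attachment might carry a profile like the one sketched below; the key names inside binding:profile are assumptions, since the blueprint only states that the vnic-type and pci flavor live in that dictionary.

  # Hypothetical sketch of a neutron port carrying an SR-IOV binding:profile.
  # Key names ('vnic_type', 'pci_flavor') and values are illustrative only.
  sriov_port = {
      'id': 'PORT-UUID',                    # passed to nova boot via --nic port-id=
      'network_id': 'NET-UUID',
      'binding:profile': {
          'vnic_type': 'direct',            # one of: virtio, direct, macvtap
          'pci_flavor': 'provider-net-vf',  # made-up flavor name
      },
  }

nova boot is then invoked exactly as before (nova boot --nic port-id=<port uuid> ...); the SR-IOV semantics are inferred from the port's profile rather than from new CLI syntax.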

a pci-extra-attr net-group

It is assumed, and highly likely, that the pci flavor APIs specified in the wiki [PCI Passthrough SR-IOV Support] will not be available in the Icehouse release. To support neutron SR-IOV ports in Icehouse, a pci-extra-attr named net-group is defined. The values of this attribute are the names of the physical networks supported in a cloud. Further, PCI stats will be collected with net-group as the grouping key.
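
A minimal sketch of what grouping by net-group means for PCI stats, with the device-dictionary shape assumed for illustration:

  # Sketch: aggregate PCI device stats with net-group in the grouping key.
  # The device dictionary shape is an assumption for illustration.
  from collections import Counter

  devices = [
      {'vendor_id': '8086', 'product_id': '10ca', 'net-group': 'service_provider_net'},
      {'vendor_id': '8086', 'product_id': '10ca', 'net-group': 'service_provider_net'},
      {'vendor_id': '8086', 'product_id': '10ca', 'net-group': 'customer_net'},
  ]

  pools = Counter((d['vendor_id'], d['product_id'], d['net-group'])
                  for d in devices)
  # pools: {('8086', '10ca', 'service_provider_net'): 2,
  #         ('8086', '10ca', 'customer_net'): 1}

The scheduler can then place a VM that needs a VF on customer_net only on hosts whose stats report a non-zero count for that net-group.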

PCI Passthrough Device List

In the wiki [PCI Passthrough SR-IOV Support], this is called PCI information. To support neutron SR-IOV ports, define the PCI passthrough device list on each compute node so that networking PCI devices are tagged with net-group. For example, if a compute node supports two physical networks, service_provider_net and customer_net, then in the PCI passthrough device list the PCI devices for networking can be tagged with either "net-group: service_provider_net" or "net-group: customer_net".
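
For example, the device list on such a node might look like the sketch below; the exact configuration syntax and key names are assumptions, since the final format was still being settled.

  # Sketch of a PCI passthrough device list tagging devices with net-group.
  # Vendor/product IDs, addresses, and key names are illustrative only.
  pci_passthrough_device_list = [
      # VFs on the adapter wired to the provider network
      {'vendor_id': '8086', 'product_id': '10ed',
       'address': '0000:06:10.*', 'net-group': 'service_provider_net'},
      # VFs on the adapter wired to the customer network
      {'vendor_id': '8086', 'product_id': '10ed',
       'address': '0000:07:10.*', 'net-group': 'customer_net'},
  ]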

vnic-type

Each neutron port has a vnic-type. Three vnic types are defined:

  • virtio: the traditional virtual port
  • direct: direct PCI passthrough without macvtap
  • macvtap: PCI passthrough with macvtap

vif-type

Each neutron port is associated with a vif-type. Two vif-types are of interest here:

  • VIF_TYPE_802_QBG: corresponds to IEEE 802.1QBG
  • VIF_TYPE_802_QBH: corresponds to IEEE 802.1BR (formerly IEEE 802.1Qbh)
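
Putting the two axes together: vnic-type decides how the guest interface is realized on the host, while vif-type decides which port-profile standard is used toward the upstream switch. A rough dispatch sketch, with all names illustrative rather than actual Nova interfaces:

  # Rough sketch of how vnic-type and vif-type combine; names are illustrative.
  VNIC_VIRTIO, VNIC_DIRECT, VNIC_MACVTAP = 'virtio', 'direct', 'macvtap'
  VIF_TYPE_802_QBG, VIF_TYPE_802_QBH = '802.1qbg', '802.1qbh'

  def plug_strategy(vnic_type, vif_type):
      """Return (host attachment, switch port profile) for a neutron port."""
      if vnic_type == VNIC_VIRTIO:
          return 'tap on a Linux/OVS bridge', None  # traditional virtual port
      attachment = ('VF via hostdev' if vnic_type == VNIC_DIRECT
                    else 'macvtap on top of a VF')
      profile = ('802.1QBG virtualport' if vif_type == VIF_TYPE_802_QBG
                 else '802.1BR (formerly Qbh) virtualport')
      return attachment, profile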

Putting It All Together

Scope of changes

  • interpret the enhanced port-id parameter and, for each neutron SR-IOV port, create a PCI request (see the sketch after this list)
  • nova.network.neutronv2: changes required to support binding:profile
  • vif dictionary: add the VLAN ID
  • nova.virt.libvirt: add support for generating configs and interface XML for neutron SR-IOV ports
  • live migration: macvtap plus per-interface network XML; this is a stretch goal for the initial release
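
As an illustration of the first item in the list above, the translation from an SR-IOV port to a PCI request could look roughly as follows; the function and key names are assumptions, not final Nova interfaces.

  # Hypothetical sketch: turn a neutron SR-IOV port into a PCI request so the
  # scheduler and resource tracker can reserve a matching VF.
  def pci_request_from_port(port, physical_network):
      """Build a PCI request for one SR-IOV port on a given physical network."""
      profile = port.get('binding:profile', {})
      if profile.get('vnic_type') not in ('direct', 'macvtap'):
          return None  # virtio ports need no PCI device
      return {
          'count': 1,                                # one VF per port
          'spec': [{'net-group': physical_network}], # match the pool's net-group
      }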