
PCI passthrough SRIOV support

Revision as of 08:35, 21 January 2014 by Yongli.he@intel.com (talk | contribs) (transite config file to API)

background

This design is based on the PCI passthrough meeting: https://wiki.openstack.org/wiki/Meetings/Passthrough https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit# Link back to the blueprint: https://blueprints.launchpad.net/nova/+spec/pci-extra-info

PCI devices have standard PCI properties like address (BDF), vendor_id, and product_id, as well as device-specific properties like each Virtual Function's physical address. A device should also be able to carry application-specific extra information, such as the physical network connectivity used by Neutron SR-IOV, or any other property.

All of these PCI properties should be well classified, and the scope each property belongs to should be well defined. Our design focuses on several PCI modules that provide the PCI pass-through SR-IOV support. Their current functionality is:

  • on the compute node, the white-list defines a spec that filters PCI devices, yielding the set of PCI devices available for allocation.
  • the PCI compute node also reports PCI stats information to the scheduler. The PCI stats contain several pools; each pool is defined by several PCI properties (vendor_id, product_id, extra_info).
  • the PCI alias defines the device selector from the user's point of view: an alias provides a set of (k, v) pairs that form specs to select from the available devices (filtered by the white-list).
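As an illustration of the white-list matching described above, here is a minimal Python sketch. The function name, device dicts, and the decision to treat spec values as regular expressions are assumptions for the example, not Nova's actual implementation.

```python
import re

def spec_matches(spec, device):
    """True if every (k, v) pair in the spec matches the device;
    values are treated as regular expressions, so '000[1-2]'
    matches '0001' or '0002'."""
    return all(
        re.fullmatch(str(v), str(device.get(k, ''))) is not None
        for k, v in spec.items()
    )

devices = [
    {'address': '0000:06:00.1', 'vendor_id': '8086', 'product_id': '0001'},
    {'address': '0000:07:00.1', 'vendor_id': '10de', 'product_id': '0fc6'},
]

# White-list spec: offer only Intel devices for allocation.
whitelist = {'vendor_id': '8086'}
available = [d for d in devices if spec_matches(whitelist, d)]
```

The alias would reuse the same kind of (k, v) matching, but against the already-filtered `available` list.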

PCI NEXT overall design

PCI flavor used to select available device

Users use a PCI flavor to select an available device. PCI flavors are global to the whole cloud and can be configured via the API. Users treat a PCI flavor as a readable name like 'oldGPU', 'FastGPU', '10GNIC', or 'SSD'.


define the flavors on Control node

The control node has flavors, which allow the administrator to package up devices for users. Flavors have a name and a matching expression that selects available devices (offered by the white-list). Flavors can overlap: the same device on the same machine may be matched by multiple flavors.

A PCI flavor is defined by a set of (k, v) pairs, where k is a *well defined* PCI property. Not every PCI property is available to a PCI flavor; only a specific set of PCI properties can be used to define one. These properties are defined via a global configuration option:

    pci_flavor_attrs = [ vendor_id, product_id, ...]

Only global attributes should appear in pci_flavor_attrs, like the vendor, product, etc.; the 'host' and 'BDF' of a PCI device should not be used. This is explicitly an optimization to simplify scheduling complexity. We may change this to an API-changeable value in the future, though we believe it will rarely be necessary to change it.
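A minimal sketch of this restriction, assuming flavor definitions arrive as plain dicts; validate_flavor is an illustrative name, not a real Nova function.

```python
# Only keys listed in the global pci_flavor_attrs option may be used
# in a PCI flavor definition.
pci_flavor_attrs = ['vendor_id', 'product_id', 'e.group']

def validate_flavor(flavor):
    """Reject flavors that use keys outside pci_flavor_attrs,
    e.g. per-device keys like 'host' or the BDF 'address'."""
    bad = [k for k in flavor if k not in pci_flavor_attrs]
    if bad:
        raise ValueError('keys not allowed in a PCI flavor: %s' % bad)

validate_flavor({'vendor_id': '8086', 'product_id': '0001'})  # accepted
```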

Compute node offers up devices via local config

The compute nodes offer available PCI devices for pass-through. Since the list of devices doesn't usually change unless someone tinkers with the hardware, the matching expression used to create this list of offered devices is stored in the compute node config.

   *the device information (device ID, vendor ID, BDF, etc.) is discovered from the device and stored as part of the PCI device, the same as in the current implementation.
   *on the compute node, additional arbitrary information, in the form of key-value pairs, can be added to the config and is included in the PCI device.

This is achieved by extending the PCI white-list to: pci_information = { pci-regex, pci-extra-attrs }. pci-regex is a dict of { string-key: string-value } pairs; it can only match device properties, like vendor_id, address, and product_id. pci-extra-attrs is a dict of { string-key: string-value } pairs whose values can be arbitrary. The total size of the extra attrs may be restricted.
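The pci_information behaviour described above can be sketched as follows; the tuple layout, function name, and sample devices are assumptions for illustration.

```python
import re

def offer_devices(pci_information, discovered):
    """Offer the discovered devices that match pci-regex, annotating
    each with the pci-extra-attrs key/value pairs."""
    pci_regex, extra_attrs = pci_information
    offered = []
    for dev in discovered:
        if all(re.fullmatch(str(v), str(dev.get(k, '')))
               for k, v in pci_regex.items()):
            # Copy the device and attach the arbitrary extra attrs.
            offered.append(dict(dev, extra_info=dict(extra_attrs)))
    return offered

discovered = [
    {'vendor_id': '8086', 'product_id': '0001', 'address': '0000:06:00.1'},
    {'vendor_id': '10de', 'product_id': '0fc6', 'address': '0000:07:00.1'},
]
pci_information = ({'vendor_id': '8086'}, {'e.group': 'gpu'})
offered = offer_devices(pci_information, discovered)
```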

PCI NEXT Config

Compute host

pci_information = { pci-regex,pci-extra-attrs }

Control node

pci_flavor_attrs=[attr,attr,attr]. For instance, when using device and vendor ID this would read:

    pci_flavor_attrs=device_id,vendor_id

When the backend adds an arbitrary ‘group’ attribute to all PCI devices:

    pci_flavor_attrs=e.group

When you wish to find an appropriate device and perhaps also filter by the connection tagged on that device, which you specify via an extra-info attribute in the compute node config:

    pci_flavor_attrs=device_id,vendor_id,e.connection

flavor API

  • overall

   nova pci-flavor-list
   nova pci-flavor-show <name|UUID>
   nova pci-flavor-create <name|UUID> description <desc>
   nova pci-flavor-update <name|UUID> set 'description'='xxxx' 'e.group'='A'
   nova pci-flavor-delete <name|UUID>


  • list available PCI flavors (white list)
   nova pci-flavor-list 
   GET v2/{tenant_id}/os-pci-flavors
   data:
    os-pci-flavors: [
                       {
                           'UUID': 'xxxx-xx-xx',
                           'name': 'xxx',
                           'description': 'xxxx',
                           'vendor_id': '8086',
                           ....
                       },
                    ]


  • get detailed information about one pci-flavor:
     nova pci-flavor-show  <UUID>
    GET v2/{tenant_id}/os-pci-flavors/<UUID>
    data:
       os-pci-flavor: {
                          'UUID': 'xxxx-xx-xx',
                          'name': 'xxx',
                          'description': 'xxxx',
                          ....
       }
  • create pci flavor
 nova pci-flavor-create  name 'GetMePowerfulldevice'  description "xxxxx"
 API:
 POST v2/{tenant_id}/os-pci-flavors
 data: 
     pci-flavor: { 
            'name':'GetMePowerfulldevice',
             description: "xxxxx" 
     }
 action:  create database entry for this flavor.


  • update the pci flavor
    nova pci-flavor-update UUID  set    'description'='xxxx'   'e.group'= 'A'
    PUT v2/{tenant_id}/os-pci-flavors/<UUID>
    with data  :
        { 'action': "update", 
          'pci-flavor':
                         { 
                            'description':'xxxx',
                            'vendor': '8086',
                            'e.group': 'A',
                             ....
                         }
        }
   action: set this as the new definition of the pci flavor.
  • delete a pci flavor
  nova pci-flavor-delete <UUID>
  DELETE v2/{tenant_id}/os-pci-flavors/<UUID>

nova command extension : --nic with pci-flavor

Attaches a virtual NIC to the Neutron network and the VM:

    nova boot --nic net-id=neutron-network,vnic-type=macvtap,pci-flavor=xxx

Use cases

General PCI pass through

Given compute nodes that contain one GPU with vendor:device 8086:0001:

  • on the compute nodes, config the pci_information
   pci_information =  { { 'vendor_id': "8086", 'device_id': "0001" }, {} }
  • on controller
  pci_flavor_attrs = ['device_id', 'vendor_id']

The compute node reports PCI stats grouped by ('device_id', 'vendor_id'). The PCI stats will report one pool: {'device_id': '0001', 'vendor_id': '8086', 'count': 1 }
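The stats pooling described above can be sketched in a few lines; collections.Counter stands in for the real stats object, and the function name is illustrative.

```python
from collections import Counter

pci_flavor_attrs = ['device_id', 'vendor_id']

def pci_stats(devices):
    """Build one pool per distinct combination of the configured
    pci_flavor_attrs, counting the devices that fall into it."""
    pools = Counter(
        tuple((attr, dev.get(attr)) for attr in pci_flavor_attrs)
        for dev in devices
    )
    return [dict(key, count=n) for key, n in pools.items()]

devices = [
    {'device_id': '0001', 'vendor_id': '8086'},
    {'device_id': '0002', 'vendor_id': '8086'},
    {'device_id': '0001', 'vendor_id': '8086'},
]
pools = pci_stats(devices)
```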

  • create PCI flavor

nova pci-flavor-create name 'bigGPU' description "passthrough Intel's on-die GPU"
nova pci-flavor-update name 'bigGPU' set 'vendor_id'='8086' 'product_id'='0001'

  • create flavor and boot with it

nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU'
nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec

General PCI pass through with multi PCI flavor candidate

Given compute nodes that contain two types of GPU, vendor:device 8086:0001 or vendor:device 8086:0002:

  • on the compute nodes, config the pci_information
   pci_information =  { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, {} }
  • on controller
  pci_flavor_attrs = ['device_id', 'vendor_id']

The compute node reports PCI stats grouped by ('device_id', 'vendor_id'). The PCI stats will report two pools:

{'device_id':'0001', 'vendor_id':'8086', 'count': 1 }

{'device_id':'0002', 'vendor_id':'8086', 'count': 1 }

  • create PCI flavor

nova pci-flavor-create name 'bigGPU' description "passthrough Intel's on-die GPU"
nova pci-flavor-update name 'bigGPU' set 'vendor_id'='8086' 'product_id'='0001'

nova pci-flavor-create name 'bigGPU2' description "passthrough Intel's on-die GPU"
nova pci-flavor-update name 'bigGPU2' set 'vendor_id'='8086' 'product_id'='0002'

  • create flavor and boot with it

nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU,bigGPU2;'
nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec

General PCI pass through with wildcard PCI flavor

Given compute nodes that contain two types of GPU, vendor:device 8086:0001 or vendor:device 8086:0002:

  • on the compute nodes, config the pci_information
   pci_information =  { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, {} }
  • on controller
  pci_flavor_attrs = ['device_id', 'vendor_id']

The compute node reports PCI stats grouped by ('device_id', 'vendor_id'). The PCI stats will report two pools:

{'device_id':'0001', 'vendor_id':'8086', 'count': 1 }

{'device_id':'0002', 'vendor_id':'8086', 'count': 1 }

  • create PCI flavor

nova pci-flavor-create name 'bigGPU' description "passthrough Intel's on-die GPU"
nova pci-flavor-update name 'bigGPU' set 'vendor_id'='8086' 'product_id'='000[1-2]'

  • create flavor and boot with it

nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU;'
nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec

PCI pass through with a grouping tag

Given compute nodes that contain two types of GPU, vendor:device 8086:0001 or vendor:device 8086:0002:

  • on the compute nodes, config the pci_information
   pci_information =  { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, { 'e.group': 'gpu' } }
  • on controller
  pci_flavor_attrs = ['e.group']

The compute node reports PCI stats grouped by ('e.group'). The PCI stats will report one pool:

{'e.group':'gpu', 'count': 2 }


  • create PCI flavor

nova pci-flavor-create name 'bigGPU' description "passthrough Intel's on-die GPU"
nova pci-flavor-update name 'bigGPU' set 'e.group'='gpu'

  • create flavor and boot with it

nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU;'
nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec


PCI SRIOV with tagged flavor

Given compute nodes that contain five PCI NICs, vendor:device 8086:0022, connected to physical network "X":

  • on the compute nodes, config the pci_information
   pci_information =  { { 'vendor_id': "8086", 'device_id': "0022" }, { 'e.physical_network': 'X' } }
  • on controller
  pci_flavor_attrs = ['e.physical_network']

The compute node reports PCI stats grouped by ('e.physical_network'). The PCI stats will report one pool:

{'e.physical_network': 'X', 'count': 5 }


  • create PCI flavor

nova pci-flavor-create name 'phyX_NIC' description 'passthrough NIC connected to physical network X'
nova pci-flavor-update name 'phyX_NIC' set 'e.physical_network'='X'

  • create flavor and boot with it

nova boot mytest --flavor m1.tiny --image=cirros-0.3.1-x86_64-uec --nic net-id=network_X,pci-flavor='1:phyX_NIC;'

transition config file to API

  1. the config file definitions for the alias and the whitelist are going to be deprecated.
  2. the new config option pci_information will replace the whitelist.
  3. the PCI flavor will replace the alias.
  4. the white list/alias schema still works.
    * It also gets a deprecation notice; the alias will fade out and be removed starting from the next release.

With this solution, we move the PCI flavor from the config file to the API.

DB for PCI configuration

Each PCI flavor is a set of (k, v) pairs, and different PCI flavors need not contain the same keys. Another problem this definition tries to solve: SR-IOV may also want feature auto-discovery (under discussion), and without (k, v) storage the flavor table would need a new 'feature' column for that. The (k, v) pair definition lets more extra information be stored with the PCI device.

  table: pci_flavor {
               id:    database id of this (k, v) pair
               UUID:  the pci-flavor this (k, v) pair belongs to
               key
               value  (might be a simple value or a reduced regular expression)
           }
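Because each row holds a single (key, value) pair, the flavors have to be reassembled when read back. A sketch of that step, with made-up UUIDs and values following the schema above:

```python
from collections import defaultdict

# Example rows following the (id, UUID, key, value) layout above.
rows = [
    (1, 'uuid-1', 'name', 'bigGPU'),
    (2, 'uuid-1', 'vendor_id', '8086'),
    (3, 'uuid-1', 'product_id', '000[1-2]'),
    (4, 'uuid-2', 'name', 'phyX_NIC'),
    (5, 'uuid-2', 'e.physical_network', 'X'),
]

def load_flavors(rows):
    """Group the per-key rows back into one dict per flavor UUID."""
    flavors = defaultdict(dict)
    for _row_id, uuid, key, value in rows:
        flavors[uuid][key] = value
    return dict(flavors)

flavors = load_flavors(rows)
```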


Requirements from SRIOV

  • group device
  For SR-IOV, all VFs belonging to the same PF share the same physical network reachability. So if you want to, say, deploy a VLAN network, you need to choose a VF of the right PF; otherwise the network does not work for you. The PCI flavor does this job well.
  • mark the device allocated to the flavor
  Networking and other special devices are not as simple as passing the device through to the VM; more configuration is needed. To achieve this, SR-IOV must know which device was allocated to the specific flavor.

Implement the grouping

Concepts introduced here: spec: a filter defined by (k, v) pairs where k is in the PCI object fields, meaning those (k, v) pairs are PCI device properties like vendor_id, 'address', pci-type, etc. extra_spec: a filter defined by (k, v) pairs where k is not in the PCI object fields.

pci utils/objects support grouping

      * pci utils (k, v) matching supports the reduced regular expression for the address
      * objects provide a class-level extract interface to extract the base spec and the extra spec
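The spec/extra_spec split above can be sketched as follows; the field list and function name are assumptions for the example.

```python
# Keys that are fields of the PCI object (assumed set, per the text);
# anything else goes into the extra spec.
PCI_OBJECT_FIELDS = {'vendor_id', 'product_id', 'address', 'pci-type'}

def extract_specs(flavor):
    """Split a flavor's (k, v) pairs into the base spec (PCI object
    fields) and the extra spec (everything else, e.g. 'e.group')."""
    base = {k: v for k, v in flavor.items() if k in PCI_OBJECT_FIELDS}
    extra = {k: v for k, v in flavor.items() if k not in PCI_OBJECT_FIELDS}
    return base, extra

base, extra = extract_specs({'vendor_id': '8086', 'e.group': 'gpu'})
```

The base spec is then matched against the PCI object, and the extra spec against the device's extra_info.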

pci-flavor (white list) supports address sets

      * the white list supports 'address' comparison via reduced regular expressions
      * the white list supports any other (k, v) pair to group devices or store special information
      * objects extract specs and extra_info; the specs are used as the whitelist spec, and the extra info is updated into the device's extra_info field

enable the instance flavor to support pci-flavor

      * the pci-flavor's name is set in the extra specs of the instance type
      * the PCI manager uses the extract method to extract the specs and extra_specs, and matches them against the PCI object and object.extra_info

pci stats group devices based on the pci flavor

        * current grouping is based on [vendor_id, product_id, extra_info]
        * going forward, 'pci-flavor' will be used to group the devices
        * stay compatible by default; switch to the new grouping policy via a new config option

Implement device marking from the pci-flavor

Here is the idea of how a user can identify which device was allocated for the pci-flavor:

    *while defining the flavor, put a marker (network UUID) into the flavor, then store it in the device's extra_info fields
    *after allocation finishes, the user can search an instance's PCI devices to find the specific device and do further configuration
    The marker data travels from the user to the device via the pci_request, which is converted from the pci-flavor.