PCI passthrough SRIOV support Icehouse
 
==== Support Other PCI Request Information ====
 
:Currently the PCI request is passed to Nova through the instance flavor. This will be extended so that a PCI request can also be passed through a network parameter, as sketched below.
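:As a rough illustration only (the field names here are assumptions, not a settled API), a PCI request derived from a network parameter might look like:

  # Hypothetical sketch (Python): a PCI request carried by a network
  # parameter rather than by the instance flavor. Field names are assumptions.
  pci_request_from_network = {
      'count': 1,
      'spec': [{'e.physical_network': 'X'}],  # devices tagged for network X
  }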
 
 
=== Use cases ===
 
 
==== General PCI pass-through ====
 
Given compute nodes contain one GPU with vendor:device 8086:0001.
 
 
*On the compute nodes, configure pci_information:
 
    pci_information = { { 'vendor_id': "8086", 'device_id': "0001" }, {} }
 
 
* On the controller:
 
  pci_flavor_attrs = ['device_id', 'vendor_id']
 
  pci_alias = {'vendor_id':'8086', 'device_id':'0001', 'name':'bigGPU', 'description': 'passthrough of the Intel on-die GPU'}
 
 
The compute node would report PCI stats grouped by ('device_id', 'vendor_id').

PCI stats will report one pool:
 
  {'device_id':'0001', 'vendor_id':'8086', 'count': 1 }
 
 
* Create a flavor and boot with it:
 
 
  nova flavor-key m1.small set pci_passthrough:pci_flavor=1:bigGPU
 
  nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec
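For illustration, a minimal sketch of how this request could be checked against the reported pool (a simplification; the request structure is an assumption, not Nova's actual scheduler code):

  # Minimal sketch (not Nova's real scheduler) of checking the PCI
  # request derived from '1:bigGPU' against the host's reported pool.
  pools = [{'device_id': '0001', 'vendor_id': '8086', 'count': 1}]
  request = {'count': 1, 'spec': [{'vendor_id': '8086', 'device_id': '0001'}]}

  def pool_matches(pool, spec):
      # every key/value named by the spec must match the pool
      return all(pool.get(k) == v for k, v in spec.items())

  usable = sum(p['count'] for p in pools
               if any(pool_matches(p, s) for s in request['spec']))
  print(usable >= request['count'])  # True: the host can satisfy the request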
 
 
==== General PCI pass-through with multiple PCI flavor candidates ====
 
 
Given compute nodes contain two types of GPUs: vendor:device 8086:0001 and vendor:device 8086:0002.
 
 
*On the compute nodes, configure pci_information:
 
    pci_information = { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, {} }
 
 
* On the controller:
 
  pci_flavor_attrs = ['device_id', 'vendor_id']
 
  pci_alias = [{'vendor_id':'8086', 'device_id':'0001', 'name':'bigGPU', 'description': 'Intel on-die GPU'}, {'vendor_id':'8086', 'device_id':'0002', 'name':'bigGPU2', 'description': 'new Intel on-die GPU'}]
 
 
The compute node would report PCI stats grouped by ('device_id', 'vendor_id').

PCI stats will report two pools:
 
 
  {'device_id':'0001', 'vendor_id':'8086', 'count': 1 }
 
  {'device_id':'0002', 'vendor_id':'8086', 'count': 1 }
 
 
 
* Create a flavor and boot with it:
 
  nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU,1:bigGPU2;'
 
  nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec
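One plausible reading of the comma-separated list (an assumption; the exact semantics are not pinned down here) is a single request that either flavor can satisfy:

  # Assumed interpretation only: '1:bigGPU,1:bigGPU2' asks for one device
  # matching either alias, so the request carries two candidate specs.
  request = {'count': 1,
             'spec': [{'vendor_id': '8086', 'device_id': '0001'},   # bigGPU
                      {'vendor_id': '8086', 'device_id': '0002'}]}  # bigGPU2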
 
 
==== General PCI pass-through with wildcard PCI flavor ====
 
 
Given compute nodes contain two types of GPUs: vendor:device 8086:0001 and vendor:device 8086:0002.
 
 
*On the compute nodes, configure pci_information:
 
    pci_information = { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, {} }
 
 
* On the controller:
 
  pci_flavor_attrs = ['device_id', 'vendor_id']
 
  pci_alias = [{'vendor_id':'8086', 'device_id':'000[1-2]', 'name':'bigGPU', 'description': 'Intel on-die GPU'}]
 
 
The compute node would report PCI stats grouped by ('device_id', 'vendor_id').

PCI stats will report two pools:
 
 
  {'device_id':'0001', 'vendor_id':'8086', 'count': 1 }
 
  {'device_id':'0002', 'vendor_id':'8086', 'count': 1 }
 
 
 
* Create a flavor and boot with it:
 
 
  nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU;'
 
  nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec
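A minimal sketch of the wildcard in action, assuming the alias spec is matched with Python-style reduced regular expressions as in pci_information:

  # Sketch: the wildcard alias spec '000[1-2]' matches both reported pools,
  # so either GPU type can serve the 'bigGPU' flavor.
  import re

  pattern = '000[1-2]'
  pools = [{'device_id': '0001', 'vendor_id': '8086', 'count': 1},
           {'device_id': '0002', 'vendor_id': '8086', 'count': 1}]
  matched = [p for p in pools if re.fullmatch(pattern, p['device_id'])]
  print(len(matched))  # 2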
 
 
==== PCI pass-through with grouping tag ====
 
 
Given compute nodes contain two types of GPUs: vendor:device 8086:0001 and vendor:device 8086:0002.
 
 
*On the compute nodes, configure pci_information:
 
    pci_information = { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, { 'e.group':'gpu' } }
 
 
* On the controller:
 
  pci_flavor_attrs = ['e.group']
 
  pci_alias = [{'e.group':'gpu', 'name':'bigGPU', 'description': 'Intel on-die GPU'}]
 
 
The compute node would report PCI stats grouped by ('e.group').

PCI stats will report one pool:

  {'e.group':'gpu', 'count': 2 }
 
 
* Create a flavor and boot with it:
 
 
  nova flavor-key m1.small set pci_passthrough:pci_flavor='1:bigGPU;'
 
  nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec
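A small sketch (simplified, not the actual pci_stats code) of how both GPU types collapse into a single pool once 'e.group' is the only grouping key:

  # Sketch: with pci_flavor_attrs = ['e.group'], both GPU types collapse
  # into one schedulable pool keyed only by the tag.
  devices = [{'vendor_id': '8086', 'device_id': '0001', 'e.group': 'gpu'},
             {'vendor_id': '8086', 'device_id': '0002', 'e.group': 'gpu'}]
  pool = {'e.group': 'gpu',
          'count': sum(1 for d in devices if d.get('e.group') == 'gpu')}
  print(pool)  # {'e.group': 'gpu', 'count': 2}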
 
 
==== PCI SR-IOV with tagged flavor ====
 
Given compute nodes contain five PCI NICs, vendor:device 8086:0022, all connected to physical network "X".
 
 
*On the compute nodes, configure pci_information:
 
 
    pci_information = { { 'vendor_id': "8086", 'device_id': "0022" }, { 'e.physical_network': 'X' } }
 
 
* On the controller:
 
 
  pci_flavor_attrs = ['e.physical_network']
 
  pci_alias = [{'e.physical_network':'X', 'name':'phyX_NIC', 'description': 'NIC connected to physical network X'}]
 
 
The compute node would report PCI stats grouped by ('e.physical_network').

PCI stats will report one pool:

  {'e.physical_network':'X', 'count': 5 }
 
 
 
* Boot with the PCI flavor requested through the NIC parameter:
 
  nova boot mytest --flavor m1.tiny --image=cirros-0.3.1-x86_64-uec --nic net-id=network_X pci_flavor='phyX_NIC:1'
 


=== Background ===

This document describes the generic PCI pass-through enhancement that we want to achieve in the Icehouse release, to support SR-IOV NIC pass-through.

The corresponding long-term design document is at https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1# . Because of the tight schedule and some remaining disagreement, we will only implement part of the design document in the I release.

Please refer to https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support for the final changes plan.

=== Overall Idea ===

The cloud admin provides extra attributes for assignable PCI devices; users (including Neutron and normal cloud users) can request PCI devices with specified extra attributes, and the Nova scheduler can make decisions based on these extra PCI device attributes.

=== Changes Plan ===

==== PCI Information Config Item ====

*Description
:A new configuration item is added to extend the current PCI whitelist configuration item to:
:*Select assignable PCI devices based on reduced regular expressions
:*Configure additional arbitrary info for these devices, as (k,v) pairs where k and v are strings
*Backward compatibility
:For compute nodes without the PCI information configuration item, the PCI whitelist will be used, with no additional info provided.
:For compute nodes with both the PCI information and PCI whitelist configuration items, the PCI information will override the PCI whitelist.
*Long Term Change
:No. This item will be the same in the long-term design.
*Example
: pci_information = { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, { 'e.physical_network': 'X' } }
:This configuration marks devices with vendor_id 0x8086 and device_id 0x0001 or 0x0002 as assignable. These devices carry the additional information 'e.physical_network' = 'X', meaning that the physical network connected to them is 'X'.
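:As an illustrative sketch only (simplified; not the actual Nova whitelist code, and assuming Python-style regular expressions), matching a discovered device against pci_information might look like:

  # Sketch: test a discovered device against pci_information and, on a
  # match, tag it with the extra info. Simplified illustration only.
  import re

  match_spec = {'vendor_id': '8086', 'device_id': '000[1-2]'}
  extra_info = {'e.physical_network': 'X'}
  device = {'vendor_id': '8086', 'device_id': '0002'}

  if all(re.fullmatch(pat, device[k]) for k, pat in match_spec.items()):
      device.update(extra_info)  # assignable; record its physical network
  print(device)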

==== Pci_flavor_attrs Config Item ====

*Description
:Specifies the PCI information and extra information that can be used to express PCI device requirements and that the PCI scheduler can use to make decisions.
*Backward Compatibility
:If not specified, it defaults to vendor_id/device_id/extra_info, as currently implemented in pci/pci_stats.py.
*Long Term Change
:In the I release, pci_flavor_attrs is defined on both compute nodes and controller nodes. After the I release, it will be defined on controller nodes (the scheduler nodes) only, and compute nodes will get this information from the controller nodes.
:As the controller nodes are always updated before the compute nodes, there will be no update issue.
*Example
: pci_flavor_attrs = ['product_id', 'vendor_id', 'e.physical_network']
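:As a small illustration (a hypothetical helper, not Nova code; 'bus_address' below is an invented out-of-schema key), pci_flavor_attrs acts as the schema for the keys an alias or PCI request may reference:

  # Sketch (hypothetical helper): keys usable in a PCI alias/request are
  # limited to pci_flavor_attrs, plus 'name'/'description' metadata.
  pci_flavor_attrs = ['product_id', 'vendor_id', 'e.physical_network']

  def valid_alias(alias):
      meta = ('name', 'description')
      return all(k in pci_flavor_attrs or k in meta for k in alias)

  print(valid_alias({'vendor_id': '8086', 'name': 'nicX'}))          # True
  print(valid_alias({'bus_address': '0000:06:00.1', 'name': 'bad'})) # False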

==== Extending PCI Stats ====

*Description
:Current PCI stats group devices only by [vendor_id, product_id, extra_info]. This will be extended to group by the keys specified in pci_flavor_attrs.
*Backward compatibility
:The PCI stats are populated into the DB by the compute nodes and then utilized by the Nova scheduler. During a live update, some compute nodes will still populate stats based on the old PCI stats configuration.
:But this should be harmless since:
:a) If a compute node provides more information than pci_flavor_attrs requires, the scheduler will not use that extra PCI information, and the scheduling decision is still correct.
:b) If a compute node provides less information than pci_flavor_attrs requires, the scheduler will treat the missing values as None; as a result, some PCI requirements may fail even though there are hosts that can meet them. But this is transient and has no correctness issue.
*Long Term Change
:No. This is the same as the long-term design.
*Example
:N/A
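:For illustration only, a simplified sketch (not the actual pci/pci_stats.py code) of grouping devices into pools by pci_flavor_attrs:

  # Sketch: build PCI stats pools by grouping devices on the keys in
  # pci_flavor_attrs; a simplified stand-in for pci/pci_stats.py logic.
  from collections import Counter

  pci_flavor_attrs = ['vendor_id', 'e.physical_network']
  devices = [
      {'vendor_id': '8086', 'device_id': '0022', 'e.physical_network': 'X'},
      {'vendor_id': '8086', 'device_id': '0022', 'e.physical_network': 'X'},
  ]
  # d.get(k) yields None for keys a node does not report, matching case b)
  pools = Counter(tuple(d.get(k) for k in pci_flavor_attrs) for d in devices)
  print(pools)  # Counter({('8086', 'X'): 2}) -> one pool with count 2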

==== Extending PCI Alias ====

*Description
:Currently the PCI alias supports PCI requirements with only vendor_id/device_id as keys. Now that the cloud admin can specify additional device information, and users can request PCI devices with that additional information, the PCI alias should be extended to support the keys defined in pci_flavor_attrs.
*Backward Compatibility
:No issue.
:The PCI alias is translated into a PCI request at VM launch and not referred to afterwards. When a new alias with keys from pci_flavor_attrs is defined on the controller node at upgrade time, new instances will use the new alias. Old instances are safe, as the PCI alias is not referred to anymore.
*Long Term Change
:In the long term, the PCI alias will be replaced by the PCI flavor, which will be created via API.
:I think there will be an upgrade issue in that situation.
*Example
: pci_alias = [{'e.physical_network':'X', 'vendor_id': '8086', 'name':'intel_nic_x_net', 'description': 'Intel NIC connected to physical network X'}]
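:A minimal sketch (an assumed simplification, not Nova's actual code) of translating such an alias into a PCI request at boot time:

  # Sketch: turn a PCI alias into the PCI request attached to a new
  # instance; metadata keys are stripped, the rest become the match spec.
  pci_alias = {'e.physical_network': 'X', 'vendor_id': '8086',
               'name': 'intel_nic_x_net',
               'description': 'Intel NIC connected to physical network X'}

  def alias_to_request(alias, count=1):
      spec = {k: v for k, v in alias.items()
              if k not in ('name', 'description')}
      return {'count': count, 'spec': [spec], 'alias_name': alias['name']}

  print(alias_to_request(pci_alias))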