PCI passthrough SRIOV support Icehouse


Overall Idea

The cloud admin provides extra attributes for assignable PCI devices; users (including Neutron and normal cloud users) can request PCI devices with the specified extra attributes, and the Nova scheduler can make scheduling decisions based on these extra PCI device attributes.

This is an important requirement for SR-IOV NIC support and will also be helpful for other use cases.

Planned Changes

PCI Information Config Item

  • Description
A new configuration item is added, extending the current PCI whitelist configuration item to:
Select assignable PCI devices based on a reduced regular expression.
Configure additional arbitrary information for these devices as (k, v) pairs, where k and v are strings.
  • Backward compatibility
For compute nodes without the PCI information configuration item, the PCI whitelist will be used, with no additional information provided.
For compute nodes with both the PCI information and PCI whitelist configuration items, the PCI information will override the PCI whitelist.
  • Long Term Change
None. This item will stay the same in the long-term design.
  • Example
pci_information = { { 'vendor_id': "8086", 'device_id': "000[1-2]" }, { 'e.physical_network': 'X' } }
This configuration specifies devices with vendor_id 0x8086 and device_id 0x0001 or 0x0002 as assignable devices, and attaches the additional information 'e.physical_network' = 'X', meaning that the physical network these devices are connected to is 'X'. A minimal matching sketch follows.
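As a rough illustration only (not the actual Nova code), the following Python sketch shows how such an entry could be applied to the devices a compute node discovers; the helper names and the sample device dictionaries are assumptions made for this example.

 import re

 # Illustrative entry in the format of the example above: the first dict
 # selects devices via reduced regular expressions, the second attaches
 # arbitrary extra (k, v) string pairs.
 PCI_INFORMATION = ({'vendor_id': '8086', 'device_id': '000[1-2]'},
                    {'e.physical_network': 'X'})

 def is_assignable(spec, device):
     # A device is assignable if every field in the match spec fully matches.
     return all(re.fullmatch(pattern, device.get(field, ''))
                for field, pattern in spec.items())

 def tag_assignable(devices, pci_information):
     # Yield assignable devices with the extra info merged in.
     spec, extra = pci_information
     for dev in devices:
         if is_assignable(spec, dev):
             yield {**dev, **extra}

 # Hypothetical devices, as a compute node might report them.
 devices = [{'vendor_id': '8086', 'device_id': '0001', 'address': '0000:03:00.1'},
            {'vendor_id': '8086', 'device_id': '0007', 'address': '0000:04:00.0'}]
 print(list(tag_assignable(devices, PCI_INFORMATION)))
 # Only the 0001 device is returned, tagged with 'e.physical_network': 'X'.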

Pci_flavor_attrs Config Item

  • Description
Specifies the PCI properties and extra information that can be used to express PCI device requirements and that the PCI scheduler can use to make scheduling decisions.
  • Backward Compatibility
If not specified, it defaults to vendor_id/device_id/extra_info, as currently implemented in pci/pci_stats.py.
  • Long Term Change
In the Icehouse release, pci_flavor_attrs is defined on both the compute nodes and the controller nodes. After Icehouse, it will be defined only on the controller nodes (the scheduler node), and the compute nodes will get this information from the controller nodes.
As the controller nodes are always upgraded before the compute nodes, there will be no upgrade issue.
  • Example
pci_flavor_attrs=[product_id, vendor_id, e.physical_network]

Extending PCI Stats

  • Description
Currently, PCI stats group devices only on [vendor_id, product_id, extra_info]. This will be extended to group by the keys specified in pci_flavor_attrs (a grouping sketch follows this list).
  • Backward compatibility
The PCI stats are populated into the DB by the compute nodes and then used by the Nova scheduler. During a live upgrade, some compute nodes will still populate stats based on the old PCI stats configuration.
But this should be harmless since:
a) If a compute node provides more information than pci_flavor_attrs requires, the scheduler will not use the extra PCI information, and the scheduling decision is still correct.
b) If a compute node provides less information than pci_flavor_attrs requires, the scheduler will treat the missing values as None; as a result, some PCI requests may fail even though a host could meet the requirement. But this is transient and causes no correctness issue.
  • Long Term Change
None. This is the same as the long-term design.
  • Example
N/A
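The spec lists no example here, so the following is only an illustrative Python sketch of the extended grouping, assuming a simple in-memory pool structure; the function name group_into_pools and the sample devices are not from Nova's pci_stats code.

 from collections import Counter

 # Pool keys taken from pci_flavor_attrs, as in the example earlier in the spec.
 PCI_FLAVOR_ATTRS = ['product_id', 'vendor_id', 'e.physical_network']

 def group_into_pools(devices, attrs):
     # Missing attributes are grouped under None, matching the
     # "treated as None" behaviour described above.
     counts = Counter(tuple(dev.get(attr) for attr in attrs) for dev in devices)
     return [dict(zip(attrs, key), count=n) for key, n in counts.items()]

 devices = [
     {'product_id': '0001', 'vendor_id': '8086', 'e.physical_network': 'X'},
     {'product_id': '0001', 'vendor_id': '8086', 'e.physical_network': 'X'},
     {'product_id': '0002', 'vendor_id': '8086'},  # reports no extra info
 ]
 for pool in group_into_pools(devices, PCI_FLAVOR_ATTRS):
     print(pool)
 # {'product_id': '0001', 'vendor_id': '8086', 'e.physical_network': 'X', 'count': 2}
 # {'product_id': '0002', 'vendor_id': '8086', 'e.physical_network': None, 'count': 1}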

Extending PCI Alias

  • Description
Currently, the PCI alias supports PCI requirements with only vendor_id/device_id as keys. Now that the cloud admin can specify additional device information and users can request PCI devices with that additional information, the PCI alias should be extended to support the keys defined in pci_flavor_attrs (a matching sketch follows this list).
  • Backward Compatibility
No issue.
The PCI alias is translated into a PCI request when the VM is launched and is not referred to afterwards. When a new alias with keys from pci_flavor_attrs is defined on the controller node at upgrade time, new instances will use the new alias. Old instances are safe because the PCI alias is no longer referred to.
  • Long Term Change
In the long term, the PCI alias will be replaced by the PCI flavor, which will be created through an API.
I think there will be an upgrade issue in that situation.
  • Example:
pci_alias = [{'e.physical_network': 'X', 'vendor_id': '8086', 'name': 'intel_nic_x_net', 'description': 'Intel NIC connected to physical network X'}]
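To illustrate, here is a small Python sketch, assuming the pool structure from the grouping sketch above, of how such an extended alias could be matched against PCI stats pools; the function pools_matching_alias is an illustrative assumption, not Nova's scheduler code.

 # Extended alias in the format of the example above.
 PCI_ALIAS = {'e.physical_network': 'X', 'vendor_id': '8086',
              'name': 'intel_nic_x_net'}

 def pools_matching_alias(alias, pools):
     # Every alias key except name/description must match the pool exactly.
     spec = {k: v for k, v in alias.items() if k not in ('name', 'description')}
     return [pool for pool in pools
             if all(pool.get(k) == v for k, v in spec.items())]

 # Pools in the shape produced by the grouping sketch above (illustrative).
 pools = [
     {'product_id': '0001', 'vendor_id': '8086',
      'e.physical_network': 'X', 'count': 2},
     {'product_id': '0002', 'vendor_id': '8086',
      'e.physical_network': None, 'count': 1},
 ]
 print(pools_matching_alias(PCI_ALIAS, pools))
 # Only the first pool qualifies: it is connected to physical network 'X'.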


Use Cases

Graphics Assignment

  • Description
Consider different GPUs in a cloud system: some support DirectX 11 and some support only DirectX 10. The cloud vendor will charge different prices for the different GPU capabilities.
A cloud user wants to create two VMs that use GPU cards. The application in one VM needs only DirectX 10 and the application in the other VM needs DirectX 11. This requirement can't be met by the Havana release implementation but can be achieved through this enhancement.
  • Steps
The cloud admin defines extra information for the GPU cards, e.g. e.highest_directx_version = '10' or e.highest_directx_version = '11' (a consolidated configuration sketch follows these steps).
The cloud admin then puts 'e.highest_directx_version' into pci_flavor_attrs.
The cloud admin defines pci_alias entries as:
pci_alias = [{'e.highest_directx_version': '10', 'vendor_id': '8086', 'name': 'intel_gfx_dirx_10', 'description': 'Intel graphics card supporting up to DirectX 10'}]
pci_alias = [{'e.highest_directx_version': '11', 'vendor_id': '8086', 'name': 'intel_gfx_dirx_11', 'description': 'Intel graphics card supporting up to DirectX 11'}]
The cloud admin defines instance flavors as:
nova flavor-key m1.small set pci_passthrough:pci_alias= 1:intel_gfx_dirx_10
nova flavor-key m1.big set pci_passthrough:pci_alias= 1:intel_gfx_dirx_11
The cloud user creates instances as:
nova boot direct10_app --flavor m1.small --image=cirros-0.3.1-x86_64-uec
nova boot direct11_app --flavor m1.big --image=cirros-0.3.1-x86_64-uec
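For completeness, here is a sketch of the matching compute-node configuration, in the format of the examples above. The device IDs 0x0116 and 0x0126 are hypothetical placeholders for the DirectX 10 and DirectX 11 GPU models, and it is assumed that a pci_information entry can be given per device type; neither detail comes from the spec.

 # Hypothetical GPU device IDs; substitute the real IDs of the deployed cards.
 pci_information = { { 'vendor_id': '8086', 'device_id': '0116' }, { 'e.highest_directx_version': '10' } }
 pci_information = { { 'vendor_id': '8086', 'device_id': '0126' }, { 'e.highest_directx_version': '11' } }
 pci_flavor_attrs = ['vendor_id', 'e.highest_directx_version']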


PCI SR-IOV NIC

See https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov for detailed usage.