PCI Passthrough SR-IOV Support (Icehouse)

Overall Idea
The cloud admin provides extra attributes for assignable PCI devices, users (including Neutron and normal cloud users) can request PCI devices with specified extra attributes, and the Nova scheduler can make decisions based on these extra PCI device attributes.

This is an important requirement for SR-IOV NIC support, and it will be helpful in other use cases as well.

PCI Information Config Item

 * Description
 * A new configuration item that extends the current PCI whitelist configuration items to:


 * Select assignable PCI devices based on reduced regular expressions
 * Configure additional arbitrary information for these devices as (k, v) pairs, where both k and v are strings


 * Backward compatibility
 * For a compute node without the PCI information configuration item, the PCI whitelist will be used, with no additional info provided.
 * For a compute node with both the PCI information and PCI whitelist configuration items, the PCI information overrides the PCI whitelist.


 * Long Term Change
 * No. This item will remain the same in the long-term design.


 * Example
 * pci_information = { { 'vendor_id': '8086', 'device_id': '000[1-2]' }, { 'e.physical_network': 'X' } }


 * This configuration specifies devices with vendor_id 0x8086 and device_id 0x0001 or 0x0002 as assignable. These devices carry the additional information 'e.physical_network' = 'X', meaning that the physical network connected to these devices is 'X'.
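
A minimal Python sketch of how this selection and tagging could work (illustrative only; the data structure and function names here are assumptions, not the Nova implementation):

    import re

    # Each entry pairs a match spec (values are reduced regular expressions)
    # with the extra (k, v) info attached to matching devices.
    PCI_INFORMATION = [
        ({'vendor_id': '8086', 'device_id': '000[1-2]'},
         {'e.physical_network': 'X'}),
    ]

    def assignable_extra_info(device):
        """Return the extra info dict if the device is assignable, else None."""
        for match_spec, extra_info in PCI_INFORMATION:
            if all(re.fullmatch(pattern, device.get(key, ''))
                   for key, pattern in match_spec.items()):
                return extra_info
        return None

    # vendor_id 0x8086 with device_id 0x0002 matches '000[1-2]':
    print(assignable_extra_info({'vendor_id': '8086', 'device_id': '0002'}))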

Pci_flavor_attrs Config Item

 * Description
 * Specifies the PCI attributes and extra information keys that can be used to express PCI device requirements and that the PCI scheduler can use to make decisions.


 * Backward Compatibility
 * If not specified, it defaults to vendor_id/device_id/extra_info, as currently implemented in pci/pci_stats.py.


 * Long Term Change
 * In the Icehouse release, pci_flavor_attrs is defined on both compute nodes and controller nodes. After Icehouse, it will be defined on the controller nodes (the scheduler nodes) only, and compute nodes will get this information from the controller nodes.


 * As the controller nodes are always updated before the compute nodes, there will be no upgrade issue.


 * Example
 * pci_flavor_attrs=[product_id, vendor_id, e.physical_network]
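
A short sketch of how a device's properties might be projected onto these keys for scheduling (illustrative only; flavor_key is a hypothetical helper, not Nova code):

    # The keys a device is summarized by for scheduling purposes.
    pci_flavor_attrs = ['product_id', 'vendor_id', 'e.physical_network']

    def flavor_key(device):
        # An attribute the device does not report shows up as None.
        return tuple(device.get(attr) for attr in pci_flavor_attrs)

    dev = {'product_id': '0002', 'vendor_id': '8086',
           'e.physical_network': 'X'}
    print(flavor_key(dev))  # ('0002', '8086', 'X')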

Extending PCI Stats

 * Description
 * Current PCI stats group devices only by [vendor_id, product_id, extra_info]. This will be extended to group by the keys specified in pci_flavor_attrs.


 * Backward compatibility
 * The PCI stats are populated into the DB by the compute nodes and then used by the Nova scheduler. During a live update, some compute nodes will still populate stats based on the old PCI stats configuration.


 * But this should be harmless since:
 * a) If a compute node provides more information than pci_flavor_attrs requires, the scheduler will not use the extra PCI information and the scheduling decision is still correct.
 * b) If a compute node provides less information than pci_flavor_attrs requires, the scheduler will treat the value of the missing information as None, and some PCI requests may fail even though a host could meet the requirement. But this is transient and causes no correctness issue (see the sketch below).
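
A minimal sketch of the extended grouping, including the None behaviour from case b) above (illustrative only; build_pci_stats is a hypothetical name, not the pci_stats.py implementation):

    from collections import defaultdict

    pci_flavor_attrs = ['product_id', 'vendor_id', 'e.physical_network']

    def build_pci_stats(devices):
        pools = defaultdict(int)
        for dev in devices:
            # A device that does not report an attr contributes None for it,
            # which is the transient case b) described above.
            key = tuple(dev.get(attr) for attr in pci_flavor_attrs)
            pools[key] += 1
        return dict(pools)

    devices = [
        {'product_id': '0001', 'vendor_id': '8086', 'e.physical_network': 'X'},
        {'product_id': '0001', 'vendor_id': '8086', 'e.physical_network': 'X'},
        {'product_id': '0002', 'vendor_id': '8086'},  # not yet reconfigured node
    ]
    print(build_pci_stats(devices))
    # {('0001', '8086', 'X'): 2, ('0002', '8086', None): 1}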


 * Long Term Change
 * No. This is the same as the long-term design.


 * Example
 * N/A

Extending PCI Alias

 * Description
 * Currently, the PCI alias supports PCI requirements with only vendor_id/device_id as keys. Now that the cloud admin can specify additional device information and users can request PCI devices with that additional information, the PCI alias should be extended to support the keys defined in pci_flavor_attrs.


 * Backward Compatibility
 * No issue.


 * The PCI alias is translated into a PCI request when the VM launches and is not referred to afterwards. When a new alias with keys from pci_flavor_attrs is defined on the controller node at upgrade time, new instances will use the new alias. Old instances are safe because the PCI alias is not referred to again.


 * Long Term Change
 * In the long term, the PCI alias will be replaced by the PCI flavor, which will be created via API.


 * I think there will be an upgrade issue in that situation.


 * Example:
 * pci_alias = [{'e.physical_network': 'X', 'vendor_id': '8086', 'name': 'intel_nic_x_net', 'description': 'Intel NIC connected to physical network X'}]
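
A minimal sketch of how an extended alias might be resolved into a PCI request at boot time (illustrative only; the names and structure are assumptions, not the actual Nova code):

    PCI_ALIASES = {
        'intel_nic_x_net': {'e.physical_network': 'X', 'vendor_id': '8086'},
    }

    def alias_to_request(extra_spec):
        # extra_spec is the flavor value, e.g. '1:intel_nic_x_net'.
        count, name = extra_spec.split(':', 1)
        return {'count': int(count), 'spec': [PCI_ALIASES[name]]}

    print(alias_to_request('1:intel_nic_x_net'))
    # {'count': 1, 'spec': [{'e.physical_network': 'X', 'vendor_id': '8086'}]}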

Graphics Assignment

 * Description
 * Consider the different GPUs in a cloud system: some support DirectX 11 and some support only DirectX 10. The cloud vendor will charge different prices for the different GPU capabilities.


 * A cloud user wants to create 2 VMs utilizing GPU cards. The application in one VM needs only DirectX 10 and the application in the other VM needs DirectX 11. This requirement can't be met by the Havana release implementation but can be achieved through this enhancement.


 * Steps
 * The cloud admin defines extra information for the GPU cards, such as e.highest_directx_version = '10' or e.highest_directx_version = '11', as shown below.
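
For example, using the pci_information item described above (the device_id value here is hypothetical):

 * pci_information = { { 'vendor_id': '8086', 'device_id': '0152' }, { 'e.highest_directx_version': '10' } }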


 * The cloud admin then puts 'e.highest_directx_version' in pci_flavor_attrs.


 * The cloud admin defines the PCI aliases as:


 * pci_alias = [{'e.highest_directx_version': '10', 'vendor_id': '8086', 'name': 'intel_gfx_dirx_10', 'description': 'Intel graphics card supporting DirectX 10 at most'}]


 * pci_alias = [{'e.highest_directx_version': '11', 'vendor_id': '8086', 'name': 'intel_gfx_dirx_11', 'description': 'Intel graphics card supporting DirectX 11 at most'}]


 * The cloud admin defines instance flavors as:
 * nova flavor-key m1.small set pci_passthrough:pci_alias=1:intel_gfx_dirx_10
 * nova flavor-key m1.big set pci_passthrough:pci_alias=1:intel_gfx_dirx_11


 * The cloud user creates instances as:
 * nova boot direct10_app --flavor m1.small --image=cirros-0.3.1-x86_64-uec
 * nova boot direct11_app --flavor m1.big --image=cirros-0.3.1-x86_64-uec

PCI SR-IOV NIC

 * See https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov for detailed usage.