
PCI configuration Database and API

merge back to: https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

===API for whitelist===

Not only the whitelist: the alias API is also meaningful from the administrator's perspective. Horizon could use the alias DB to find out what kind of devices are available to tenants, and this configuration is more natural in the form of an API.
 
 
 
The white-list, alias, and on-demand group definitions are all essentially sets of (k, v) pairs; the PCI API can exploit this to achieve high flexibility.
 
 
 
The group-key configuration, on the other hand, is a kind of policy for PCI and won't change rapidly, so it should stay in the config file.
 
 
 
===Use cases ===
 
 
 
====admin inspects PCI devices (under discussion)====

The admin might want to know whether any PCI devices are available for use. Even though the admin should already know this information, an inspect facility is convenient. There are two methods to implement inspection:
 
#  via database
 
  Currently, PCI devices only enter the database after filtering; to inspect the devices on a node, we would have to let all devices into the database. The problem is that the DB would become too large and eventually slow down queries, which becomes a scaling problem. Another problem is that the whitelist takes effect at the libvirt layer, not in the compute node (PCI tracker), for the same reason: to reduce unnecessary information.
 
 
#  via RPC call
 
  To inspect a single node, use an RPC call to get the result from that node and show it to the admin. This needs more investigation.
 
 
 
Review comments:
 
* I think we need a REST API to list the PCI devices present on a host
 
* That API could also show the groups
 
* Also maybe it could say which are in use (as currently reported to the scheduler)
 
* I think we should list them here: v2/{tenant_id}/os-hosts/{host_name}/os-pci-devices (or similar)
 
* See http://api.openstack.org/api-ref-compute.html and look for os-hosts
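
For illustration only (the resource path follows the reviewer's suggestion above; the field names and status values are assumptions, not a settled design), a listing call might look like:

    GET/v2/{tenant_id}/os-hosts/{host_name}/os-pci-devices

    { 'pci_devices': [
        { 'address': '0000:01:00:7', 'vendor_id': '8086', 'product_id': '1520', 'group': 'QuickAssist', 'status': 'available' },
        { 'address': '0000:01:01:7', 'vendor_id': '8086', 'product_id': '1520', 'group': 'QuickAssist', 'status': 'in-use' } ] }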
 
 
 
====admin creates global groups====

Suppose the data center has identical hosts with the same configuration, so the same device has the same address on every host. The admin then wants to group the addresses '0000:01:00:7' and '0000:01:01:7' into a group, say 'QuickAssist'.
 
 
 
Then the admin can create the white-list in one command:

    nova pci-config create white-list  set description='xx'  address='0000:01:00:7','0000:01:01:7'  group='QuickAssist'

The API call will then be:

    create/v2/{tenant_id}/os-pci-config/white-list
 
with data:
 
    { 'group':'QuickAssist',  'address': ['0000:01:01:7', '0000:01:00:7'], 'description': "xxx"}
 
 
 
Review comments:
 
* I would go for: create/v2/{tenant_id}/os-pci-device-flavor
 
* please add a uuid for the group/flavor
 
* why is there no vendor id here?
 
* please consider the APIs for listing the current white-lists
 
* we need all the CRUD operations really. Please see other APIs to copy the patterns we use there (http://api.openstack.org/api-ref-compute.html see host-aggregates)
 
* I think you might want to add key-value metadata to these groups, like host-aggregates, but maybe lets ignore that for now, it can be added later
 
 
 
====admin creates per-host groups====

Say the data center has a special, powerful machine with a powerful 'QuickAssist' PCI card located at '0000:01:00:7', and the admin wants to group its VFs into 'PowerDecode'.
 
 
 
Create the white-list in one command:

    nova pci-config create white-list  set compute-node-id=<hostid>  description='xx'  address='0000:01:00:7'  group='PowerDecode'

The API call will then be:

    create/v2/{tenant_id}/os-pci-config/white-list

with data:

    { 'group':'PowerDecode', 'address':'0000:01:00:7', 'description': 'xxx', 'compute-node-id': '<hostid>' }
 
 
 
Review comments:
 
* this feels wrong
 
* please just use host aggregates with a special metadata key that links back to the above pci-groups (via uuid) to associate a host, or set of hosts, with the PCI devices
 
 
 
====Check if the white-list works well====

The white-list takes effect after about 30 seconds, so the admin wants to check that it is working correctly, using:

(This API is covered by another blueprint, listed here for the full picture: https://blueprints.launchpad.net/nova/+spec/pci-api-support)
 
 
 
    nova pci-list <node_id>
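
A possible output (the columns are illustrative; the exact format belongs to the pci-api-support blueprint above):

    +--------------+-----------+------------+-------------+-----------+
    | address      | vendor_id | product_id | group       | status    |
    +--------------+-----------+------------+-------------+-----------+
    | 0000:01:00:7 | 8086      | 1520       | QuickAssist | available |
    | 0000:01:01:7 | 8086      | 1520       | QuickAssist | allocated |
    +--------------+-----------+------------+-------------+-----------+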
 
 
 
====admin configures an alias that requests devices from a specific group====
 
 
 
To allocate devices from a group, you first need to create an alias for it. The flow is the same as for the white-list, except that an alias is always global.
 
 
 
Create the alias in one command:

    nova pci-config create alias  set name='GetMePowerfulldevice'  group='PowerDecode'

The API call will then be:

    create/v2/{tenant_id}/os-pci-config/alias
 
with data:
 
    { 'group':'PowerDecode', 'name': "GetMePowerfulldevice"}
 
 
 
Review comments:
 
* this makes no sense to me
 
* if we use host aggregates for the host association, we should just be able to use the group/flavor name here right?
 
 
 
====admin configures flavors to use the above alias====
 
nova flavor-key m1.small set pci_passthrough:alias=GetMePowerfulldevice:2 
 
 
 
Review comments:
 
* again, this just becomes the pci-device-flavor
 
 
 
====admin boots a VM with this flavor====
 
nova boot  mytest  --flavor m1.small  --image=cirros-0.3.1-x86_64-uec
 
 
 
Review:
 
* perfect, this is what I hoped for
 
 
 
====admin wants to modify a white-list====
 
 
 
To change a white-list, e.g. to use only Intel's accelerator, and remove a specific address from the group:
 
 
 
# list available global white-lists
 
    nova pci-config list white-list
 
 
 
API will be:
 
    get/v2/{tenant_id}/os-pci-config/white-list

    returns the list of global white-lists
 
 
 
# check a specific node's white-lists:
 
    nova pci-config list white-list  node <node_id>
 
 
 
API will be:
 
    get/v2/{tenant_id}/os-pci-config/white-list/node/<node-id>

    returns all white-lists for that node
 
 
 
# get detailed information about a white-list:
 
    nova pci-config show white-list  <white-list UUID>
 
 
 
API will be:
 
    get/v2/{tenant_id}/os-pci-config/white-list/uuid/<UUID>

    returns the detailed white-list dict.
 
 
 
# delete one key from a PCI white-list

    nova pci-config update  white-list <UUID>  unset  address='0000:01:01:7'
 
 
 
API will be:
 
    update/v2/{tenant_id}/os-pci-config/white-list/uuid/<UUID>/unset
 
    with data { 'address':  '0000:01:01:7'}
 
 
 
# append to or change one key's value

    nova pci-config update  white-list  <UUID>  set vendor_id='8086'
 
   
 
API will be:
 
    update/v2/{tenant_id}/os-pci-config/white-list/uuid/<UUID>

    with data { 'vendor_id': '8086' }
 
 
 
Review comments:
 
* this is good information, but I don't like the flow or the API
 
* please rework after seeing my comments above
 
* I think this could be simpler and more consistent
 
 
 
====admin wants to delete a whole white-list====
 
 
 
Delete a white-list by UUID:
 
    nova pci-config delete white-list  <UUID>
 
 
 
API will be:
 
    delete/v2/{tenant_id}/os-pci-config/white-list/uuid/<UUID>
 
 
 
Review comments:
 
* as above, please revise
 
 
 
====admin configures SRIOV====
 
 
 
Reviewer comments:
 
* this info is missing
 
* how does the admin associate PCI devices to neutron networks?
 
* I think the answer is that the pci-flavor has some metadata that tells you what neutron network it maps to, but it's messy
 
 
 
====user requests SRIOV====
 
 
 
Reviewer:
 
* I will come back to review this...
 
 
 
 
 
* 1) Set up a group for NIC allocation; the group can be based on PF or address.
 
    nova pci-config create white-list  set "description"='xx'  "address"='0000:01:00:7','0000:01:01:7'  "group"='Intel.NIC.vlan'
 
 
 
* 2) Configure the PCI manager to use the key 'group' for grouping devices

    Add the following configuration to nova.conf:

    pci_tracking_group_by = ['group']
 
 
 
If this is not set, the default grouping policy uses [vendor_id, product_id].
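
For illustration, this default is equivalent to explicitly setting (assuming the option accepts the same list syntax):

    pci_tracking_group_by = ['vendor_id', 'product_id']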
 
 
 
 
 
* 3) Configure the alias
 
    nova pci-config create alias  set "name"='vlan-SRIOV'  "group"='Intel.NIC.vlan'
 
 
 
* 4) Configure the flavor: this is under discussion; neutron might want to use the --nic command-line option.
 
    nova flavor-key m1.small set "pci_passthrough:sriov:<nic1-tag>:alias"="vlan-SRIOV:2"
 
 
 
For SRIOV there should be a marker <nic1-tag> for the NIC; the <nic1-tag> will be stored in the device allocated to that NIC.

After allocation finishes on the compute node, the related network module can find the device via the <nic1-tag> to perform interface configuration.
 
 
 
* 5) The admin boots a VM with this flavor
 
    nova boot mytest --flavor m1.small --image=cirros-0.3.1-x86_64-uec
 
 
 
====transition from config file to API====

# The config-file definitions for alias and whitelist are going to be deprecated.

# If the database is not empty, the config-file settings are ignored and a deprecation warning is given.

# If the database is empty, the configuration is read from the file, also with a deprecation notice; this fallback will be removed starting from the next release.
 
 
 
With this solution, we move the PCI configuration from file to API.
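
A minimal sketch of this precedence logic (illustrative Python only, not actual Nova code; the function and argument names are placeholders):

    import warnings

    def load_pci_config(db_rows, conf_whitelist, conf_alias):
        """Return the effective PCI whitelist/alias settings, preferring the DB."""
        if db_rows:
            # database is populated: config file entries are ignored with a warning
            if conf_whitelist or conf_alias:
                warnings.warn("PCI whitelist/alias options in nova.conf are deprecated "
                              "and ignored because the pci_config table is populated",
                              DeprecationWarning)
            return db_rows
        # database is empty: fall back to the config file, with a deprecation notice
        if conf_whitelist or conf_alias:
            warnings.warn("Reading the PCI whitelist/alias from nova.conf is deprecated "
                          "and will be removed in the next release", DeprecationWarning)
        return {'whitelist': conf_whitelist, 'alias': conf_alias}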
 
 
 
===DB for PCI configuration===
 
  table: pci_config {
                id:               database id of this (k, v) pair
                UUID:             which alias/whitelist this (k, v) pair belongs to
                compute_node_id:  for easy per-host configuration queries; NULL for global configuration
                object:           for easy queries by object type (whitelist/alias)
                key
                value
                value_type:       string, RE, list  # for future enhancement
            }
 
 
 
alias is always global.
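
For example (values illustrative), the global 'QuickAssist' white-list from the use case above decomposes into one row per (k, v) pair, roughly:

    (id=1, UUID=<wl-uuid>, compute_node_id=NULL, object='whitelist', key='group',       value='QuickAssist',               value_type='string')
    (id=2, UUID=<wl-uuid>, compute_node_id=NULL, object='whitelist', key='address',     value='0000:01:00:7,0000:01:01:7', value_type='list')
    (id=3, UUID=<wl-uuid>, compute_node_id=NULL, object='whitelist', key='description', value='xxx',                       value_type='string')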
 
 
 
===PCI related Objects===

  Objects: white-list / alias / group keys (on-demand group)

  Object properties: sets of (k, v) pairs
 
   
 
===API interface===
 
 
 
* get summary information

    GET/v2/{tenant_id}/os-pci-config/{config-object-name}

    Shows the global configuration summary.

    e.g. {config-object-name} is pci_alias:

    [ {'pci_alias': 'nic-group', 'UUID': 'alias-UUID', 'description': 'blabla'} ]
 
 
* get summary information for a compute node

    GET/v2/{tenant_id}/os-pci-config/{config-object-name}/node/{compute-node-id}

    Shows the per-compute-node PCI configuration summary:

    e.g. {config-object-name} is pci_whitelist:

    [ {'pci_whitelist': 'NIC-2', 'UUID': 'whitelist-UUID', 'description': 'blabla'} ]
 
 
 
* get detailed information about a config

    GET/v2/{tenant_id}/os-pci-config/{config-object-name}/uuid/<UUID>

    e.g. {config-object-name} is pci_whitelist:

      { 'vendor_id': '8086', 'group': 'nic-g1', 'compute-node-id': '3', ... }
 
 
 
* update an existing config

    UPDATE/v2/{tenant_id}/os-pci-config/{config-object-name}/<UUID>

    with data:

        { 'sriov-type': 'macvtap', ... }

    or with array data:

        { 'address': ['a1', 'a2', ...], ... }
 
 
 
* unset an existing config's key value (the value supports lists)

    UPDATE/v2/{tenant_id}/os-pci-config/{config-object-name}/<UUID>/unset

    with data:

        { 'address': '0000:01:00:7' }

    This will remove '0000:01:00:7' from the address key's value set.
 
 
 
* delete a whole config item
 
    DELETE/v2/{tenant_id}/os-pci-config/{config-object-name}/<UUID>
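
As a usage sketch (assuming the delete maps to a plain HTTP DELETE with a normal Keystone token; the object name pci_whitelist and the endpoint are illustrative):

    curl -X DELETE -H "X-Auth-Token: $TOKEN" \
        http://<nova-api>:8774/v2/<tenant_id>/os-pci-config/pci_whitelist/<UUID>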
 
