WholeHostAllocation
Latest revision as of 22:03, 30 October 2013

  • Launchpad Entry: NovaSpec:whole-host-allocation
  • Created: 6th May 2013
  • Contributors: Phil Day, HP Cloud Services

Summary

Pclouds (Private Clouds) allow a tenant to allocate all of the capacity of a host for their exclusive use. The host remains part of the Nova configuration; i.e. this differs from bare metal provisioning in that the tenant does not get access to the host OS, just a dedicated pool of compute capacity. This gives the tenant guaranteed isolation for their instances, at the premium of paying for a whole host.

The Pcloud mechanism provides an abstraction between the user and the underlying physical servers. A user can uniquely name and control servers in the Pcloud, but never learns the actual host names and can never perform admin commands on those servers.

The initial implementation provides all of the operations needed to create and manage a pool of servers, and to schedule instances to them. Extending this further in the future could allow additional operations such as specific scheduling policies, control over which flavors and images can be used, etc.

Pclouds are implemented as an abstraction layer on top of Host Aggregates. Each host flavor pool and each Pcloud is an aggregate. Properties of the Host Flavors and Pclouds are stored as aggregate metadata. Hosts in a host flavor pool are not available for general scheduling. Hosts are allocated to Pclouds by adding them into the Pcloud aggregate (they may also be in other types of aggregates as well, such as an AZ). All of the manipulation of aggregates, including host allocation, is performed by the Pcloud API layer; users never have direct access to the aggregates, aggregate metadata, or hosts. (Admins can of course still see and manipulate the aggregates.)

[[File:Pclouds.png]]

Supported Operations

The Cloud Administrator can:

  • Define the types of server that can be allocated (host-flavors)
  • Add or remove physical servers from a host-flavor pool
  • See details of a host flavor pool


A User can:

  • Create a Pcloud
  • See the list of available host flavors
  • Allocate hosts of a particular host-flavor and availability zone to their Pcloud
  • Enable or disable hosts in their Pcloud
  • Authorize other tenants to be able to schedule to their Pcloud
  • Set the RAM and CPU allocation ratios used by the scheduler within their Pcloud
  • See the list of hosts and instances running in their Pcloud


A Tenant that has been authorized by a Pcloud can:

  • Schedule an instance to run in the Pcloud. Scheduling is controlled by an additional filter, so all other filters (Availability Zones, affinity, etc.) still apply

Host Flavors

Host flavors define the types of hosts that users can allocate to their Pclouds (c.f. flavors for instances). Because physical hosts have fixed characteristics, a host flavor is defined by just three properties:

  • A unique identifier
  • A description (which can include details such as the number of CPUs, memory, etc.)
  • A units value, used to control the quota cost of a host

Host Flavors are implemented as aggregates with the name "Pcloud:host_flavor:<id>", and have the following metadata values:

Key                 Value               Notes
pcloud:type         'host_flavor_pool'  Identifies this as a host flavor aggregate
pcloud:flavor_id    flavor_id
pcloud:description  description         A description of the host type
pcloud:units        units               The number of quota units consumed by a host of this type
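The metadata layout above can be sketched as a small helper; the function name and the example flavor values are hypothetical, illustrating only the key/value scheme the spec defines.

```python
def host_flavor_metadata(flavor_id, description, units):
    """Build the metadata dict for a 'Pcloud:host_flavor:<id>' aggregate.

    Sketch only: the real Pcloud API layer would write these values
    through Nova's aggregate metadata calls.
    """
    return {
        'pcloud:type': 'host_flavor_pool',  # marks this as a host flavor aggregate
        'pcloud:flavor_id': flavor_id,
        'pcloud:description': description,
        'pcloud:units': str(units),         # quota units consumed per host of this type
    }

# Hypothetical example flavor:
meta = host_flavor_metadata('m1.host', '16 CPUs, 128GB RAM, 2TB disk', 4)
```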

A host must be empty and its compute service disabled before it can be added to a host flavor pool. Adding it to the host flavor pool will enable the compute service (the Pcloud scheduler filter will stop anything from being scheduled to it until it is allocated to a Pcloud). Hosts remain in this aggregate even when they are allocated to a Pcloud; membership of the host flavor aggregate defines the host flavor of the host, so there is no need to keep a separate mapping of hosts to host types.
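The precondition for pooling a host can be expressed as a one-line check (a sketch with hypothetical parameter names, not an actual Nova API):

```python
def can_add_to_flavor_pool(instance_count, service_enabled):
    """A host qualifies for a host flavor pool only when it is empty
    and its compute service is currently disabled (sketch)."""
    return instance_count == 0 and not service_enabled
```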


Pclouds

A Pcloud is in effect a user defined and managed aggregate. However, the Pclouds API provides an abstraction layer to control ownership, provide authorization semantics, and encapsulate system properties such as host names. On creation a Pcloud is assigned a UUID, which forms part of the aggregate name and provides the Pcloud ID for users to use in the API.

Pclouds are implemented as aggregates with the name "Pcloud:<uuid>", and have the following metadata values on creation:

Key           Value      Notes
pcloud:type   'pcloud'   Identifies this as a pcloud aggregate
pcloud:id     uuid       The generated UUID for this Pcloud
pcloud:name   name       The user-assigned name for this Pcloud
pcloud:owner  tenant_id  The tenant_id that created the Pcloud; used for authorisation and billing
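Putting the naming rule and the creation-time metadata together, Pcloud creation can be sketched as follows. The function name is hypothetical; the aggregate name format, metadata keys, and the automatic authorization of the owner as a tenant come from the spec.

```python
import uuid

def create_pcloud(name, owner_tenant_id):
    """Return the aggregate name and initial metadata for a new Pcloud (sketch)."""
    pcloud_id = str(uuid.uuid4())
    aggregate_name = 'Pcloud:%s' % pcloud_id
    metadata = {
        'pcloud:type': 'pcloud',
        'pcloud:id': pcloud_id,
        'pcloud:name': name,
        'pcloud:owner': owner_tenant_id,
        # The owner is automatically added as an authorized tenant at creation time.
        'pcloud:%s' % owner_tenant_id: 'pcloud-tenant',
    }
    return aggregate_name, metadata
```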


Host Operations

A user adds a host by specifying a name (which must be unique within the Pcloud), the required host flavor type, and optionally an availability zone. The Pcloud API finds a free host in the appropriate host flavor aggregate and adds it to the Pcloud aggregate. Note that because this is a simple allocation mechanism there is no need to call the scheduler: either there is a free host in the host flavor aggregate or there isn't.

When adding the host to the Pcloud aggregate, the following additional metadata values are also added:

Key                           Value                Notes
pcloud:host:<name>            hypervisor_hostname  Maps the user's name to the real hostname
pcloud:host_state:<hostname>  disabled             Controls whether a host can be used for scheduling; hosts are always disabled when added
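The two metadata records added per host can be sketched as a helper (hypothetical function name; the key formats are the ones in the table above):

```python
def host_allocation_metadata(user_name, hypervisor_hostname):
    """Metadata added when a host is allocated into a Pcloud aggregate (sketch)."""
    return {
        # Maps the user's chosen name to the real hostname, which stays hidden.
        'pcloud:host:%s' % user_name: hypervisor_hostname,
        # Hosts are always disabled when first added.
        'pcloud:host_state:%s' % hypervisor_hostname: 'disabled',
    }
```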


Because the allocation occurs in the API server, there is a risk of a race condition when two allocation requests are processed at the same time. The Pcloud API provides protection against this by:

  • Hosts are selected randomly from the list of available hosts.
  • After the host has been added to the Pcloud aggregate, a further check is made to see how many Pcloud aggregates the host is a member of. If it is a member of more than one Pcloud aggregate then it is removed from the Pcloud and the allocation retried (up to a configurable number of attempts). In this way, if more than one Pcloud allocates the same host, one or all of them will release it and retry.
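The random-pick-then-verify scheme above can be sketched against a simple in-memory stand-in for aggregate membership. All names here are hypothetical; only the algorithm (random choice, post-add membership check, bounded retry) comes from the spec.

```python
import random

MAX_ATTEMPTS = 3  # retry limit; the real value is configurable

# In-memory stand-in for aggregate membership: host -> set of Pcloud ids.
memberships = {}

def add_to_pcloud(host, pcloud_id):
    memberships.setdefault(host, set()).add(pcloud_id)

def remove_from_pcloud(host, pcloud_id):
    memberships[host].discard(pcloud_id)

def allocate_host(free_hosts, pcloud_id):
    """Allocate one free host to pcloud_id, guarding against races (sketch)."""
    for _ in range(MAX_ATTEMPTS):
        candidates = [h for h in free_hosts if not memberships.get(h)]
        if not candidates:
            return None                        # no free host of this flavor
        host = random.choice(candidates)       # random pick reduces collisions
        add_to_pcloud(host, pcloud_id)
        # Post-add check: a concurrent request may have grabbed the same
        # host, in which case it now sits in more than one Pcloud.
        if len(memberships[host]) > 1:
            remove_from_pcloud(host, pcloud_id)  # back off and retry
            continue
        return host
    return None
```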

In order to remove a host from a Pcloud it must be empty (so that it can be used by another Pcloud), and in order to empty it the Pcloud owner must be able to prevent further instances from being scheduled to it. Pclouds therefore provide a simple enable/disable mechanism for hosts, which is recorded in the metadata and implemented by the Pcloud filter. The underlying service enable/disable is not used, so that it remains available to system administrators.

A host must be both empty and disabled before it can be removed from a Pcloud.

The Pcloud owner can see a list of hosts, their states, and the UUIDs of instances on those hosts.


Tenant Operations

The owner of a Pcloud can authorize other tenants to be able to schedule to the Pcloud. Authorizing a tenant adds the following record to the aggregate metadata:

Key                 Value            Notes
pcloud:<tenant_id>  'pcloud-tenant'  Indicates that <tenant_id> can schedule to this Pcloud

Note that to avoid being limited by the metadata record size, a separate metadata value is created for each authorized tenant. The owner of a Pcloud is automatically added as an authorized tenant at creation time, but they can remove themselves if required.
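Because each authorized tenant is a separate metadata record, authorization and revocation reduce to adding and removing a single key (function names hypothetical; the key/value convention is from the table above):

```python
def authorize_tenant(metadata, tenant_id):
    """Allow tenant_id to schedule to the Pcloud (sketch)."""
    metadata['pcloud:%s' % tenant_id] = 'pcloud-tenant'

def revoke_tenant(metadata, tenant_id):
    """Remove tenant_id's scheduling rights; simply deletes the key (sketch)."""
    metadata.pop('pcloud:%s' % tenant_id, None)
```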


Scheduler Configuration

Where a scheduler filter can be configured by aggregate properties, it is possible to provide that configuration on a per-Pcloud basis. Currently the Core Filter and Ram Filter support per-aggregate configuration, and so the Pcloud API provides the capability to set the appropriate values on the underlying aggregate.


Using a Pcloud

Scheduling is controlled by the PcloudFilter, which must be configured as part of the filter scheduler.

A user requests an instance to be scheduled to a Pcloud by passing the scheduler hint "pcloud=<pcloud_uuid>".

The Pcloud filter implements the following rules for each host:

  • If the user has not specified a Pcloud, a host only passes if it is in neither a Pcloud nor a Host Flavor aggregate
  • If the user has specified a Pcloud, a host only passes if it is in the specified Pcloud, is enabled, and the tenant is authorized to use that Pcloud
  • In all other cases the host does not pass
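The three rules can be sketched as a predicate over a host's merged aggregate metadata. This is not the actual PcloudFilter class, just an illustration of its logic; parameter names are hypothetical and the metadata keys are those defined earlier in this spec.

```python
def pcloud_filter_passes(host_meta, hostname, hint_pcloud, tenant_id):
    """Sketch of the PcloudFilter rules for a single host.

    host_meta: merged metadata of the aggregates the host belongs to
        (empty dict for a host outside any Pcloud / host flavor aggregate).
    hint_pcloud: value of the 'pcloud' scheduler hint, or None.
    """
    in_pcloud_or_pool = host_meta.get('pcloud:type') in ('pcloud', 'host_flavor_pool')
    if hint_pcloud is None:
        # No hint: only hosts outside Pcloud / host flavor aggregates pass.
        return not in_pcloud_or_pool
    # Hint given: must be the right Pcloud, host enabled, tenant authorized.
    if host_meta.get('pcloud:id') != hint_pcloud:
        return False
    enabled = host_meta.get('pcloud:host_state:%s' % hostname) == 'enabled'
    authorized = host_meta.get('pcloud:%s' % tenant_id) == 'pcloud-tenant'
    return enabled and authorized
```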


Examples of using Pclouds

See the Pcloud Examples page (WholeHostAllocation-pcloud-example) for examples using the nova client.