
WholeHostAllocation

  • Launchpad Entry: NovaSpec:whole-host-allocation
  • Created: 6th May 2013
  • Contributors: Phil Day, HP Cloud Services

Summary

Allow a tenant to allocate all of the capacity of a host for their exclusive use. The host remains part of the Nova configuration, i.e. this is different from bare metal provisioning in that the tenant does not get access to the host OS - just a dedicated pool of compute capacity. This gives the tenant guaranteed isolation for their instances, at the premium of paying for a whole host.

Extending this further in the future could form the basis of hosted private clouds, i.e. the semantics of having a private cloud without the operational overhead.

The proposal splits into three parts:

  1. Allowing the user to define and manage a pool of servers
  2. Adding / removing servers to / from that pool, and generating the associated notification messages for billing
  3. Allowing the user to schedule instances to a specific pool that they own or have rights to use

Defining and managing pools of servers

The basic mechanism for grouping servers into pools with associated metadata already exists in the form of host aggregates, and there are already various parts of the code that rely on specific metadata values for things like scheduling semantics. However, host aggregates were never designed or intended to be directly user facing:

  • There is no concept of an aggregate owner
  • They have a simple ID (rather than a uuid)
  • Metadata key values are effectively internal details of Nova
  • There are no quotas associated with aggregates

So instead of changing aggregates to be user facing we will provide a wrapper layer that creates aggregates in a controlled way with a limited set of pre-defined metadata. We call these "wrapped" aggregates Pclouds (Private Clouds).

A user can perform the following operations on a Pcloud:

Create a Pcloud: User specifies a descriptive name of the Pcloud. The system will validate the request against quotas and create a host aggregate with the following properties:

  • The aggregate name is set to a generated uuid (This is what the user will use to identify the Pcloud in subsequent calls)
  • The user supplied name is stored as a metadata value of the aggregate
  • The project_id of the owner is stored as a metadata value of the aggregate
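
A minimal sketch of what the wrapper might do behind this call, assuming an authenticated admin python-novaclient handle (nova); the pcloud:* metadata keys are placeholders invented here for illustration, not an existing convention:

    import uuid

    def create_pcloud(nova, display_name, project_id):
        """Create the host aggregate that backs a new Pcloud."""
        pcloud_id = str(uuid.uuid4())
        # The generated uuid becomes the aggregate name and the handle
        # returned to the user; no availability zone is set.
        aggregate = nova.aggregates.create(pcloud_id, None)
        # The user supplied name and the owning project are recorded as
        # aggregate metadata (key names are hypothetical).
        nova.aggregates.set_metadata(aggregate.id, {
            'pcloud:name': display_name,
            'pcloud:owner': project_id,
        })
        return pcloud_id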


Add a project to a Pcloud: Pclouds are owned by a single project for quota, billing, and management purposes. But the owner may want to allow other projects to use the resources within a Pcloud (a common use case in enterprises). So the owner of a Pcloud is allowed to add project_ids as tenants of the Pcloud, which are then recorded in the aggregate metadata. Because metadata keys have to be unique, and metadata values are limited in length, project access rights are recorded as a metadata item per project using the project ID as the key. Note that the owner of a Pcloud is automatically added as a tenant at creation time, but they can remove themselves if required.
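
For illustration, the metadata on the backing aggregate for a Pcloud owned by project "proj-a" that has also granted access to project "proj-b" might look like this (the pcloud:* keys are hypothetical; the per-project keys follow the scheme described above):

    {
        'pcloud:name': 'finance-prod',   # user supplied name
        'pcloud:owner': 'proj-a',        # owning project_id
        'proj-a': 'tenant',              # owner added as a tenant at creation
        'proj-b': 'tenant',              # project added by the owner
    }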


Remove a project from a Pcloud: Simply deletes the metadata key for that project

Change the properties of a Pcloud:

(Future): Provides a controlled mechanism to update other aggregate metadata values that affect scheduling - for example providing a different value for the CPU or memory overcommit ratio.
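
For example, a future call of this kind might simply write the aggregate metadata keys that the aggregate-aware scheduler filters already read, disabling overcommit on the dedicated hosts; this is purely illustrative:

    # Hypothetical wrapper call; internally it would just update the
    # backing aggregate's metadata.
    nova.aggregates.set_metadata(aggregate.id, {
        'cpu_allocation_ratio': '1.0',   # no CPU overcommit
        'ram_allocation_ratio': '1.0',   # no memory overcommit
    })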


List Pclouds: List all Pclouds that are available to a project. This includes Pclouds that the project has access rights to (so projects can "discover" which Pclouds they can use without having to ask the Pcloud owner). Both the UUID and name are shown in the list, along with the role (Owner or Tenant). If the project is the owner of a Pcloud then it can also see the number of hosts in the Pcloud.
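
An illustrative response shape for such a listing (the endpoint and field names below are not committed to by this spec):

    GET /v2/{tenant_id}/os-pclouds

    {
        "pclouds": [
            {"id": "7d31f0e2-9b65-4f4e-8c52-1c2d3e4f5a6b",
             "name": "finance-prod",
             "role": "Owner",
             "host_count": 4},
            {"id": "aa10b2c3-4d5e-6f70-8192-a3b4c5d6e7f8",
             "name": "shared-analytics",
             "role": "Tenant"}
        ]
    }

Note that the host count only appears for Pclouds where the caller is the Owner.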


Show Pcloud Details: Only available to the owner of the Pcloud; shows details such as the list of projects allowed access, the list of hosts, and the instances running on those hosts. Hosts will be presented as a hash of hostname and project_id (as in the server details).
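
A minimal sketch of how that hash could be derived, assuming the same scheme Nova uses for the hostId field in server details (a SHA-224 over the project id and host name):

    import hashlib

    def pcloud_host_id(project_id, host):
        # Opaque, per-project identifier for a physical host, mirroring
        # the hostId shown in server details.
        return hashlib.sha224((project_id + host).encode('utf-8')).hexdigest()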


Delete a Pcloud: Remove a Pcloud. Can only be performed when there are no hosts associated with the Pcloud.

Adding / Removing Servers

There are a number of different ways that this could be specified and implemented, but to start with we propose to keep to a very simple definition and mechanism:

  • The types of hosts available will be specified as "host flavors" (this is consistent with Ironic). Each host flavor will define the number of CPUs, memory, and disk space of a server, possibly with other attributes in extra_specs. The flavor table will be extended to include an attribute which defines whether a flavor is an instance or a host flavor. The current flavor APIs will be constrained to show and work on just instance flavors.
  • There will be a pool of prebuilt but disabled servers in a specific aggregate (the pcloud_free_pool aggregate) that are used exclusively for whole host allocation. (We don't want the complexity at this stage of interacting with the scheduler to find a suitable host with no VMs on it.)
  • A user will be able to request that a host of a particular host flavor and from a particular AZ is added to a Pcloud. The system will look for a suitable host in the pcloud_free_pool, move it into the aggregate behind the Pcloud, and enable it (see the sketch after this list).
  • A user will be able to request that a specific host (identified by its hash) is removed from a Pcloud. Providing there are no instances on the host, the system will disable the service and move the host back to the pcloud_free_pool.
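
A rough sketch of the add / remove flow in terms of existing aggregate and service operations, assuming an admin python-novaclient handle (nova); in practice this logic would sit behind the Pcloud API rather than be driven by the user directly:

    def add_host_to_pcloud(nova, pcloud_aggregate, free_pool_aggregate, host):
        """Move a prebuilt, disabled host from the free pool into a Pcloud."""
        nova.aggregates.remove_host(free_pool_aggregate.id, host)
        nova.aggregates.add_host(pcloud_aggregate.id, host)
        nova.services.enable(host, 'nova-compute')

    def remove_host_from_pcloud(nova, pcloud_aggregate, free_pool_aggregate, host):
        """Return an empty host to the free pool.

        The caller is responsible for checking that no instances are
        running on the host before this is invoked.
        """
        nova.services.disable(host, 'nova-compute')
        nova.aggregates.remove_host(pcloud_aggregate.id, host)
        nova.aggregates.add_host(free_pool_aggregate.id, host)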


In addition to the notification messages that are generated on allocation and deallocation of a host, there also needs to be the equivalent of an instance.exists message (this is typically needed by billing systems, for example, to ensure consistency). This could be generated by the periodic task in the compute manager that currently generates the instance.exists messages. It may also be necessary to extend the instance.exists messages to include the uuid of the Pcloud the host is part of.
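
The payload of such a periodic event is not defined here, but it might carry something like the following (the event and field names are purely illustrative):

    {
        "event_type": "pcloud.host.exists",
        "payload": {
            "pcloud_id": "7d31f0e2-9b65-4f4e-8c52-1c2d3e4f5a6b",
            "tenant_id": "proj-a",
            "host_flavor": "host.large",
            "audit_period_beginning": "2013-07-24T00:00:00Z",
            "audit_period_ending": "2013-07-24T01:00:00Z"
        }
    }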

Schedule instances to a specific pool

Users should be able to specify whether they want to schedule to a specific Pcloud or to the general resource pool (i.e. they are not constrained to one or the other). Neither of the current scheduler isolation filters provides this. The user can specify the uuid of a Pcloud to use via scheduler hints.

The basic flow of the Pcloud filter is as follows:

   If a pcloud is specified in scheduler hints
        if the host is not part of the specified Pcloud:
             Fail
        else if the user does not have access to the Pcloud:
            Fail
        else
            Pass
   else if the host is part of a Pcloud
       Fail
   else
       Pass
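
A minimal sketch of such a filter against the filter scheduler interface of the time (BaseHostFilter.host_passes); the pcloud:id metadata key and the flattening of the aggregate metadata are assumptions made for illustration, not existing code:

    from nova import db
    from nova.scheduler import filters


    class PcloudFilter(filters.BaseHostFilter):
        """Keep general traffic off dedicated hosts and only let a
        Pcloud's tenants schedule into it."""

        def host_passes(self, host_state, filter_properties):
            context = filter_properties['context']
            hints = filter_properties.get('scheduler_hints') or {}
            requested = hints.get('pcloud')          # Pcloud uuid, if given

            # Aggregate metadata for this host; the DB API returns
            # {key: set(values)}, flattened here to one value per key.
            raw = db.aggregate_metadata_get_by_host(context, host_state.host)
            metadata = dict((k, next(iter(v))) for k, v in raw.items())
            host_pcloud = metadata.get('pcloud:id')  # hypothetical key

            if requested:
                if host_pcloud != requested:
                    return False                     # not in the requested Pcloud
                # Tenant project ids are stored as metadata keys.
                return context.project_id in metadata
            # No Pcloud requested: only hosts outside any Pcloud pass.
            return host_pcloud is None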