Multiple Active Scheduler Drivers/Policies

Summary

Support for multiple active scheduler policies and/or drivers associated with different classes of workloads (within a single Nova deployment).

Blueprint: https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers

Rationale

In heterogeneous environments, it is often required that different hardware pools, designed for different classes of workloads, are managed under different policies. In Grizzly, basic partitioning of hosts and enforcement of compatibility between flavors and hosts during instance scheduling can already be implemented using host aggregates and FilterScheduler with AggregateInstanceExtraSpecsFilter. However, it is not possible to define, for example, different sets of filters and weights, or even entirely different scheduler drivers, for different classes of workloads.

For example, the admin may want to have one pool with a conservative CPU overcommit (e.g., for CPU-intensive workloads) and another pool with an aggressive CPU overcommit (for workloads which are less CPU-bound).

This blueprint introduces a mechanism to overcome this limitation.

Note: while this can be achieved with Cells in large-scale, geo-distributed environments, there is no existing solution within a single (potentially small) Nova deployment.
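
For context, a minimal sketch of a Grizzly-style single-policy configuration (option names from Grizzly-era nova.conf; values are purely illustrative) shows where the limitation comes from: the scheduler driver, filter list, and overcommit ratio are global settings, so only one policy can be in effect at a time.

  # nova.conf (illustrative) -- one global scheduler policy for the whole deployment
  [DEFAULT]
  scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
  scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter,AggregateInstanceExtraSpecsFilter
  # a single CPU overcommit ratio applies to every host the scheduler considers
  cpu_allocation_ratio=16.0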

User Stories

  1. An administrator partitions the managed environment into host aggregates, decides on specialized scheduler configurations (policies) for some or all of the aggregates, and configures host aggregates and flavors accordingly.
  2. On instance provisioning, the corresponding target aggregate and scheduling policy are determined based on the selected flavor.
Note: more options to determine the desired policy will be considered in the future.

Usage Details

Configuration (user story 1)

The administrator will:

  1. Specify the 'default' scheduler driver and policy in nova.conf, as usual, e.g., FilterScheduler with CoreFilter and AggregateInstanceExtraSpecsFilter.
  2. Define one or more host aggregates comprising the desired partitioning of the managed environment, e.g., aggr1 and aggr2, so that each aggregate is designated for certain classes of workloads (e.g., CPU-intensive and CPU-balanced).
  3. (*) Attach a new key-value pair to the metadata of each aggregate, specifying the label of the scheduling policy, e.g.: "sched_policy=low_cpu_density" for aggr1, and "sched_policy=high_cpu_density" for aggr2.
  4. Decide which flavors should be used for each of the classes of workloads.
  5. (*) Specify the corresponding "sched_policy" key-value pair in the extra specs of each flavor.
  6. Decide on the scheduler properties associated with each policy, and attach the corresponding properties with the "sched:" prefix to the extra specs (metadata) of the corresponding flavors, e.g., "sched:cpu_allocation_ratio=1.0" for flavor1 and "sched:cpu_allocation_ratio=8.0" for flavor2 (see the command-line sketch after the note below).

(*) Note: the new "sched_policy" metadata key-value pair is used to guarantee correct placement across aggregates via AggregateInstanceExtraSpecsFilter. If other key-value pairs already provide this guarantee, adding "sched_policy" to the aggregate and flavor is not necessary.
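
As an illustration only, the steps above could be carried out with the nova CLI roughly as follows. The aggregate names, policy labels, and "sched:" values are the examples used on this page; the host names, flavor IDs, and flavor sizes are made up, and older novaclient versions may require the aggregate ID instead of its name.

  # 2. Partition the environment into two host aggregates
  nova aggregate-create aggr1
  nova aggregate-create aggr2
  nova aggregate-add-host aggr1 compute-host-1      # hypothetical host names
  nova aggregate-add-host aggr2 compute-host-2

  # 3. Label each aggregate with its scheduling policy
  nova aggregate-set-metadata aggr1 sched_policy=low_cpu_density
  nova aggregate-set-metadata aggr2 sched_policy=high_cpu_density

  # 4./5. Create a flavor per workload class and tie it to a policy
  nova flavor-create flavor1 101 4096 20 4          # illustrative id/ram/disk/vcpus
  nova flavor-create flavor2 102 4096 20 4
  nova flavor-key flavor1 set sched_policy=low_cpu_density
  nova flavor-key flavor2 set sched_policy=high_cpu_density

  # 6. Attach per-policy scheduler properties as "sched:" extra specs
  nova flavor-key flavor1 set sched:cpu_allocation_ratio=1.0
  nova flavor-key flavor2 set sched:cpu_allocation_ratio=8.0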

Invocation (user story 2)

The user will invoke an instance provisioning request specifying one of the flavors defined by the admin, as usual.

Note: when the flavor does not override any scheduler options, the default scheduler configuration (from nova.conf) will be used, as before.
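
For example, with the hypothetical flavors from the sketch above, an ordinary boot request is all that is needed (the image name is a placeholder):

  # flavor1 carries the low_cpu_density policy via its extra specs
  nova boot --flavor flavor1 --image <image-id> vm-cpu-intensive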

Limitations and further enhancements

This implementation has a few limitations, which will be addressed via subsequent patches/blueprints:

  • The admin needs to ensure that if two flavors have conflicting scheduling policies (e.g., different CPU overcommit levels), the corresponding instances are not created on the same host (e.g., by keeping the flavors restricted to disjoint sets of aggregates).
  • Currently it is not possible to dynamically change the scheduling policy for VM instances provisioned from a given flavor.
  • If the admin wants to manage workloads with the same virtual hardware under different scheduling policies, a separate flavor needs to be created for each combination.

Ultimately, we plan to introduce scheduling policies as 'first-class citizens' in Nova (DB, CRUD, association with flavors/aggregates/tenants, etc.). This will enable resolving most or all of the above limitations.