Note: This blueprint appears to be abandoned, having last been edited in 2011.
- Launchpad Entry: NovaSpec:foo or SwiftSpec:foo
A cloud compute scheduler must implement the infrastructure's core policies in order to manage workloads effectively in a datacenter environment. Policies such as matching workloads to compute resources, managing the power and thermal environment, and making scheduling decisions based on security requirements and Service Level Agreements (e.g. Quality of Service) are required for best data center utilization. Workload placement must also meet any requirements imposed by networking and storage capabilities. This blueprint describes a scheduler designed to meet these requirements.
The scheduler will initially be incorporated as a simple engine with a default policy plug-in. This simple scheduler can then be extended with more complex plug-ins that take advantage of policies and constraints stored in the system database.
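The engine-plus-plug-in split described above might look like the following minimal sketch. The class and method names are illustrative assumptions, not part of any existing Nova interface:

```python
class PolicyPlugin:
    """Base class for scheduler policy plug-ins (hypothetical interface)."""

    def select_host(self, hosts, workload):
        raise NotImplementedError


class DefaultPolicy(PolicyPlugin):
    """Simple default policy: pick the host with the most free RAM."""

    def select_host(self, hosts, workload):
        candidates = [h for h in hosts if h["free_ram_mb"] >= workload["ram_mb"]]
        if not candidates:
            return None
        return max(candidates, key=lambda h: h["free_ram_mb"])


class SchedulerEngine:
    """Engine that delegates every placement decision to a pluggable policy."""

    def __init__(self, policy=None):
        # A richer plug-in can be swapped in without touching the engine.
        self.policy = policy or DefaultPolicy()

    def schedule(self, hosts, workload):
        return self.policy.select_host(hosts, workload)
```

More complex plug-ins would replace `DefaultPolicy` with one that consults the policy/constraint database instead of a single hard-coded rule.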
This blueprint proposes an effort to incorporate into OpenStack a robust, richly featured scheduler for optimal workload placement and lifecycle management.
Major data center operators require core levels of functionality for use in complex environments. Functionality should start with dense workload placement and then move on to considerations involving power, thermal, security, and SLA.
- The scheduler is distributed (meaning each instance is stateless) and shares a common store for current state. This common store is also used to hold policies and constraints.
- Replaceable modules are required to support various algorithms for policy and constraint evaluation; that is, no single algorithm can be used for all situations. For example, different customers will want to put more emphasis on power versus QoS, or vice versa.
The Policy & Constraint based Scheduler (PCS) will be a plug-in that extends the capability of the current Nova scheduler. The PCS will work in conjunction with the Network and Volume controllers to provide robust, reliable, trusted, and transparent scheduling decisions. Not only does the PCS have to work, but its algorithms must deliver repeatable, reliable, fast, predictable, and consistent results.
- Policy & Constraint based Scheduler Attributes
- Scheduler is distributed (each instance is stateless) sharing common storage for state, policies and constraints in the system wide DB.
- Allows for data aggregation from child schedulers or workers up through all scheduling layers
- Scheduler SHOULD be able to send workload placement requests to child schedulers or workers
- All actions are logged
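The "stateless instances over a shared store" attribute could be sketched as follows; `SharedStore` is a stand-in for the system-wide DB, and all names are assumptions for illustration:

```python
import logging


class SharedStore:
    """Stand-in for the system-wide store of state, policies and constraints."""

    def __init__(self):
        self.host_state = {}   # host name -> current resource state
        self.policies = {}     # policy name -> definition


class StatelessScheduler:
    """Each scheduler instance keeps no local state; everything comes
    from the shared store, so any instance can serve any request."""

    def __init__(self, store):
        self.store = store
        self.log = logging.getLogger("pcs")

    def place(self, workload_id, ram_mb):
        # Re-read current state on every request -- nothing is cached locally.
        fits = {h: s for h, s in self.store.host_state.items()
                if s["free_ram_mb"] >= ram_mb}
        choice = max(fits, key=lambda h: fits[h]["free_ram_mb"], default=None)
        self.log.info("place %s -> %s", workload_id, choice)  # all actions logged
        return choice
```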
- Policy/Constraint Db Attributes
- Policies and constraints are kept in a versioned database
- Db supports rollback and change logging
- Server/zones can optionally communicate capabilities, resource availability levels and support for policy attributes. The scheduler should store such information in the policy/constraint Db.
- Policies are formed based on instance requirements like "secure instance", "flexible network instance", etc.
- Policies can result in the formation of Server Groups within zones in order to service the policy
- Host hardware information in the Db allows mapping scheduler policies to hardware policies where applicable.
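The versioning, rollback, and change-logging attributes above might be illustrated with a toy in-memory store; this is not a schema proposal, just a sketch of the required behaviour:

```python
class VersionedPolicyDb:
    """Minimal illustration of a versioned policy store supporting
    rollback and change logging, as required above."""

    def __init__(self):
        self._versions = [{}]   # every change appends a full snapshot
        self._changelog = []    # ordered log of all modifications

    @property
    def current(self):
        return self._versions[-1]

    def set_policy(self, name, value):
        snapshot = dict(self.current)
        snapshot[name] = value
        self._versions.append(snapshot)
        self._changelog.append(("set", name, value))

    def rollback(self):
        """Revert to the previous version; the rollback itself is logged."""
        if len(self._versions) > 1:
            self._versions.pop()
            self._changelog.append(("rollback",))
```

A production implementation would keep versions and the change log in the system-wide database rather than in memory.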
- Workload characteristics
- Attributes data: Compute, Security/Trust, Power, Network, Storage and QoS
- Constraints: Power cap
- Workload metadata representing the workload requirements (provided by the Customer or the Cloud Service Provider and may be authenticated by the CSP)
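Since the blueprint leaves the metadata format open, the workload characteristics above could be expressed as a structure along these lines; every field name here is an assumption for illustration only:

```python
# Illustrative workload metadata covering the attribute and constraint
# categories listed above; not a defined OpenStack format.
workload_metadata = {
    "compute": {"vcpus": 4, "ram_mb": 8192},
    "security": {"trusted_host_required": True},
    "power": {"power_cap_watts": 250},           # constraint: power cap
    "network": {"min_bandwidth_mbps": 1000},
    "storage": {"volume_gb": 100},
    "qos": {"sla_tier": "gold"},
    "provided_by": "customer",                   # may be authenticated by the CSP
}
```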
- The PCS engine uses a common policy/constraint DB
- Workers/Servers/zones that do not advertise capabilities default to a standard scheduler action
- Child scheduler MUST respond to a Workload placement Request (Accept/Reject)
- Child Scheduler MUST interpret workload placement request. Request MAY include Policy and Workload Metadata
- Scheduler MUST be able to re-place a workload based on a Worker/Child Scheduler indication.
- Worker/Child Scheduler MAY be able to indicate a request to re-place the workload based on supported policy.
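The accept/reject exchange between a parent scheduler and its children, described in the requirements above, might be sketched like this (all class and function names are hypothetical):

```python
from enum import Enum


class Response(Enum):
    ACCEPT = "accept"
    REJECT = "reject"


class ChildScheduler:
    """A child MUST interpret a placement request, which MAY carry
    policy metadata, and respond with Accept or Reject."""

    def __init__(self, free_ram_mb, supported_policies=()):
        self.free_ram_mb = free_ram_mb
        self.supported_policies = set(supported_policies)

    def handle_request(self, workload, policies=()):
        # Reject if any requested policy is unsupported or capacity is short.
        if not set(policies) <= self.supported_policies:
            return Response.REJECT
        if workload["ram_mb"] > self.free_ram_mb:
            return Response.REJECT
        return Response.ACCEPT


def place(children, workload, policies=()):
    """Parent re-places the workload until some child accepts."""
    for child in children:
        if child.handle_request(workload, policies) is Response.ACCEPT:
            return child
    return None
```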
- Policy Types (to be extensible)
- Performance (select right compute element for optimum performance)
- Trust (place workloads on trusted servers in a trusted environment: node or zone)
- Power (place workload for optimal power utilization, dynamic power adjustments, power cap)
- Thermal (manage datacenter temperatures)
- SLA / QoS (place workloads on compute elements most likely to deliver the committed performance and availability)
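To keep the policy types extensible as required above, evaluators could be registered rather than hard-coded. A minimal sketch, with assumed host/workload fields:

```python
# Registry mapping policy-type names to evaluator functions, so new
# policy types can be added without changing the scheduler core.
POLICY_REGISTRY = {}


def register_policy(name):
    """Decorator that registers a policy evaluator under a type name."""
    def wrap(fn):
        POLICY_REGISTRY[name] = fn
        return fn
    return wrap


@register_policy("power")
def power_policy(host, workload):
    # Accept hosts whose power headroom covers the workload's power cap.
    return host["power_headroom_w"] >= workload.get("power_cap_watts", 0)


@register_policy("trust")
def trust_policy(host, workload):
    # A workload needing a trusted host only matches trusted servers.
    return host["trusted"] or not workload.get("trusted_host_required", False)
```

Performance, thermal, and SLA/QoS evaluators would register themselves the same way.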
- New workloads placed based on policies and constraints of server / zones information and workload characteristics
- Workloads migrated based on policies and constraints of server / zones information
- Workloads may be migrated within a ServerGroup if cluster balance is a requirement
- Workloads may be migrated based on Host health
- Workloads can be moved (or terminated) based on rebalancing events triggered by Scheduler or Worker
- Supports a "what-if" mode to evaluate impacts of proposed policy/constraint Db changes
- Child scheduler MAY be able to send a Power/Thermal alert
- Scheduler MAY migrate workloads based upon alerts
- Scheduler MAY be able to communicate to a Power Control Module a request for the number of available workers and their power states (ON, Throttled, Stand By or OFF)
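The alert-driven migration behaviour described above (a MAY requirement) could look like the following sketch; the alert format and class names are assumptions:

```python
# Power states the Power Control Module is expected to report.
POWER_STATES = ("ON", "Throttled", "Stand By", "OFF")


class AlertDrivenScheduler:
    """Reacts to Power/Thermal alerts from child schedulers by
    migrating affected workloads (a MAY behaviour in this spec)."""

    def __init__(self, placements):
        self.placements = placements  # workload id -> host name

    def on_alert(self, alert, migrate_to):
        """Move every workload off the alerting host; return what moved."""
        moved = []
        for wl, host in list(self.placements.items()):
            if host == alert["host"]:
                self.placements[wl] = migrate_to
                moved.append(wl)
        return moved
```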
Diagram of the Scheduler attachment:pc_sched.png
- Policy and Constraint (P&C) based scheduler: This is the core element of the scheduler that interfaces with the OpenStack scheduler interface. This block hosts the pluggable Resolver and DB elements.
- P&C Resolver: This pluggable element implements specific P&C algorithms. This block takes as input the action to be evaluated, uses the existing P&C Db, and the existing data to satisfy the data needs of the P&C to be evaluated. The output is a specific recommendation for placement and configuration of the action under evaluation.
- P&C DB: This pluggable module hosts the data store for the policies and constraints. The goal is to isolate the schema and details of the storage from the rest of the solution.
- Host DB: Information on host hardware to allow for specific hardware policy and alert structure
- Workload characteristics class for specifying workload characteristics to map to instance requirements
- Data Collection: This module gathers the data from the environment (including servers, storage, network, facilities, workload etc.) that are needed for the evaluation of a given action.
- Admin: these elements are used to configure the data collection and adjust the policies and constraints as needed.
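The pluggable P&C Resolver described above takes an action, consults the P&C DB and collected data, and emits a recommendation. A minimal sketch under assumed interfaces (the DB is a mapping of action types to policy predicates, and the collector is any callable returning host data):

```python
class PCResolver:
    """Pluggable resolver: evaluates an action against the P&C DB and
    collected environment data, producing a placement recommendation."""

    def __init__(self, pc_db, collector):
        self.pc_db = pc_db          # action type -> list of policy predicates
        self.collector = collector  # callable returning current host data

    def resolve(self, action):
        policies = self.pc_db.get(action["type"], [])
        for host in self.collector():
            # Recommend the first host satisfying every applicable policy.
            if all(policy(host, action) for policy in policies):
                return {"host": host["name"], "action": action["type"]}
        return None  # no host satisfies the policies and constraints
```

A different resolver plug-in could implement a scoring or optimization algorithm behind the same `resolve` entry point.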
- A workload to be placed and configured is presented to the P&C based scheduler along with its requirements (UE: metadata format???).
- The P&C scheduler validates the request and hands the request off to the P&C resolver.
- Assumes that the infrastructure owner has previously populated the P&C requirements for the data center (thermals, users, etc.).
- Gather the P&Cs that apply to this workload.
- Gather workload characteristic pre-evaluated and stored in DB
- Gather the advertised Host characteristics
- Gather the data required to evaluate this P&C.
- Evaluate. Log the results of the evaluation.
- Produce the recommendation.
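The gather/evaluate/recommend steps above can be sketched end to end as one function; all data structures and field names here are illustrative assumptions:

```python
def evaluate_placement(workload, policy_db, trait_db, hosts, log):
    """Mirrors the evaluation steps listed above (sketch only)."""
    # Gather the P&Cs that apply to this workload.
    policies = policy_db.get(workload["class"], [])
    # Gather workload characteristics pre-evaluated and stored in the DB.
    traits = trait_db.get(workload["id"], {})
    # Gather the advertised host characteristics and evaluate against them.
    scores = {}
    for host in hosts:
        if all(policy(host, traits) for policy in policies):
            scores[host["name"]] = host.get("free_ram_mb", 0)
    # Log the results of the evaluation.
    log.append(("evaluated", workload["id"], scores))
    # Produce the recommendation.
    return max(scores, key=scores.get) if scores else None
```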