Scheduler Implementation

Live Notes may be taken for this topic at: http://etherpad.openstack.org/Scheduler

A request comes in from a user to build a new slice/instance/VM.

Given many thousands of potential nodes, how do we decide where to put it?

Identify the things that factor into the decision (the hard rules are sketched in code after this list), such as:

  • Qualitative suitability measures (hard rules)
    • Is the node "active"? (Nodes will likely have some sort of state: "active", "maintenance", etc.)
    • Is the hardware current, or is it being phased out?
    • Will it fit? (A 16 GB slice won't fit on a node with only 256 MB free.)
  • Quantitative suitability (weighting)
    • State (several states may be eligible for new VMs, but some might be preferred over others; e.g. "active" servers should be preferred over "spare" servers, though if need be a "spare" server is eligible)
    • How busy is the server doing other things (like building VMs or cleaning up after old ones)?
    • Does the user have other VMs in the same cluster? (Not a hard rule, because we only have so many clusters.)
    • Does the user have other VMs on the same node? (Not a hard rule, because we only have so many nodes.)
      • Does the node have a cached copy of the disk image already?
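
As a rough illustration of the hard rules above, here is a minimal Python sketch; the Node fields, state names, and the is_candidate helper are hypothetical, not actual scheduler code. A node that fails any hard rule is disqualified outright.

from dataclasses import dataclass

@dataclass
class Node:
    state: str          # e.g. "active", "maintenance", "spare"
    current_hw: bool    # False if the hardware is being phased out
    free_ram_mb: int    # free memory on the node

ELIGIBLE_STATES = {"active", "spare"}   # "maintenance" nodes never qualify

def is_candidate(node: Node, requested_ram_mb: int) -> bool:
    """Hard rules: failing any single rule disqualifies the node."""
    return (node.state in ELIGIBLE_STATES
            and node.current_hw
            and node.free_ram_mb >= requested_ram_mb)

For example, a request for a 16 GB slice (16384 MB) is rejected by any node with only 256 MB free, regardless of how well it scores on the soft rules.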

Do we need "best match", "good enough", or "not too shabby"?

Discussion to include auctioning and "go get it" as well. Ideas (the first two are sketched in code after this list):

  • A host responds only if it is a valid candidate
  • Then the least busy host wins
  • A queue at launch, like a coin sorter (more complex)
  • First to answer wins, unless the request is large; then the least busy wins
  • Host-side intelligence is better because it scales; higher-level intelligence does not
  • Pluggable scheduling policies
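
The first two ideas combine naturally: each host evaluates the request locally (which is what makes the approach scale) and only valid candidates answer; the scheduler then just picks the least busy responder. A minimal sketch, with invented names rather than an existing API:

def host_reply(host_id, passes_hard_rules, busyness):
    """Runs on each host: answer only if the hard rules pass."""
    if passes_hard_rules:
        return {"host": host_id, "busyness": busyness}
    return None   # stay silent; the scheduler never hears from invalid hosts

def pick_host(replies):
    """Runs centrally: among the hosts that answered, the least busy wins."""
    answered = [r for r in replies if r is not None]
    if not answered:
        return None   # no host was a valid candidate
    return min(answered, key=lambda r: r["busyness"])["host"]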

Current Ozone implementation of Server Best Match (SBM), sketched in code after this list:

  • Hard rules
  • Soft rules
  • Best weighted match wins
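
A best-weighted pass along those lines might look like the following sketch, where the hard rules have already filtered the candidate list and each soft rule contributes a weight. The rules and weights below are invented examples, not the actual SBM values:

SOFT_RULES = [
    (lambda n: n["state"] == "active", 10),   # prefer "active" over "spare"
    (lambda n: not n["busy"], 5),             # prefer hosts not busy building or cleaning up
    (lambda n: n["image_cached"], 3),         # disk image already cached on the node
]

def best_weighted_match(candidates):
    """Return the surviving candidate with the highest total weight."""
    def score(node):
        return sum(weight for rule, weight in SOFT_RULES if rule(node))
    return max(candidates, key=score)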