Scheduler/NormalizedWeights

Overview

Weighers were originally used in the scheduler, but they are also used in the cells code; this blueprint is applicable to both.

Currently we are using the raw values returned by the available weighers directly, instead of normalizing them. This makes it difficult to properly use multipliers for establishing the relative importance between two weighers (a weigher returning big magnitudes will overshadow one returning smaller values).

For example, a weigher that returns 0 and 1 as values will need a multiplier that is big enough for it to be taken into account with regard to the RAM weigher, which returns much higher values (i.e. free RAM in MB). The current workaround is the usage of big enough multipliers for each weigher, or the usage of a large enough weight value. However, even if a system is properly configured, a RAM upgrade would imply a review and reconfiguration of the weighers (if the infrastructure is heterogeneous, the problem is even worse).

This blueprint aims to introduce weight normalization so that we can apply multiple weighers easily. All the weights will be normalized between 0.0 and 1.0, so that the final weight for a host will be as follows:

   weight = w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...

This way it is easier for a resource provider to configure and establish the relative importance of the weighers, since the operator knows well in advance that the maximum value a weigher will return is 1.0 and the minimum is 0.0.
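
As an illustration of the formula above, here is a minimal sketch (hypothetical helper and weigher names, not the actual scheduler code) of how the final host weight could be combined, assuming each weigher has already produced a value normalized to [0.0, 1.0]:

    # Sketch only: combine already-normalized weigher values with their
    # multipliers, following weight = w1_multiplier * norm(w1) + ...
    def combined_weight(normalized_weights, multipliers):
        # Both arguments are dicts keyed by weigher name.
        return sum(multipliers[name] * value
                   for name, value in normalized_weights.items())

    # Two weighers, both already normalized to [0.0, 1.0]:
    host_weight = combined_weight({'ram': 0.75, 'cached_image': 1.0},
                                  {'ram': 1.0, 'cached_image': 2.0})
    # host_weight == 1.0 * 0.75 + 2.0 * 1.0 == 2.75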

Two kinds of normalization will be provided:

  • If the weigher specifies the upper and lower values, the weighed objects will be normalized with regard to these values.
  • In the case that the weigher does not supply the lower and upper limits for the weighed objects, the maximum and minimum values from the weighed objects will be used.
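
A minimal sketch of such a normalization helper (hypothetical names, not the actual implementation) covering both cases: if the weigher supplies its own lower and upper limits they are used, otherwise the minimum and maximum of the weighed values themselves are used:

    # Sketch only: scale a list of raw weights into the range [0.0, 1.0].
    def normalize(values, minval=None, maxval=None):
        if not values:
            return []
        if minval is None:          # no lower limit supplied by the weigher
            minval = min(values)
        if maxval is None:          # no upper limit supplied by the weigher
            maxval = max(values)
        if maxval == minval:
            return [0.0] * len(values)   # all objects weigh the same
        return [(v - minval) / float(maxval - minval) for v in values]

    normalize([512, 1024, 2048])           # -> [0.0, 0.333..., 1.0]
    normalize([0, 1], minval=0, maxval=1)  # -> [0.0, 1.0]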

Moreover, the RamWeigher uses its multiplier to change its behaviour: to switch the "spreading" behaviour to "stacking", an operator has to set a negative multiplier. However, if somebody is using several weighers, this is no longer guaranteed to hold.

Examples

New weigher returning 0 and 1

Currently the RamWeigher returns the available RAM in MB as the weight for a node. Assume that we are implementing a weigher that will return 0 and 1 values (for example true or false for a cached instance image). If we have nodes with a few GB of RAM, we have to set a multiplier big enough to make the new weigher influence the final weight. If we upgrade the nodes to some hundreds of GB of RAM, we also have to readapt the multiplier. If we have a heterogeneous cluster, we have to study carefully how we can make this weigher really count.
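
The following worked sketch uses made-up numbers (the hosts, free RAM values and multipliers are illustrative only) to show the difference: with raw weights the boolean weigher is irrelevant unless its multiplier is comparable to the free RAM in MB, while with normalized weights a small multiplier already expresses the intended importance:

    # Illustrative numbers only.
    # Host A: 16384 MB free, image not cached; host B: 15872 MB free, image cached.
    raw_ram = {'A': 16384.0, 'B': 15872.0}
    cached  = {'A': 0.0, 'B': 1.0}

    # Raw weights: with multipliers 1.0 and 1.0 the cached-image weigher
    # cannot compensate a 512 MB difference in free RAM.
    raw_a = 1.0 * raw_ram['A'] + 1.0 * cached['A']   # 16384.0 -> A wins
    raw_b = 1.0 * raw_ram['B'] + 1.0 * cached['B']   # 15873.0

    # Normalized weights (free RAM scaled into [0.0, 1.0] across these hosts):
    norm_ram = {'A': 1.0, 'B': 0.0}
    norm_a = 1.0 * norm_ram['A'] + 2.0 * cached['A']  # 1.0
    norm_b = 1.0 * norm_ram['B'] + 2.0 * cached['B']  # 2.0 -> B wins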

Cells RamByInstanceTypeWeigher and MuteChildWeigher

The MuteChildWeigher is a good example of inflating a weight so that it is significant enough to compete with the RamByInstanceTypeWeigher. It defines a 'mute_weight_value=1000.0' and a 'mute_weight_multiplier=-10.0' so that it can really influence the final weight. However, if we introduce some nodes with larger RAM, we would need to reconfigure it.
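
With normalized weights, the same kind of penalty can be expressed without inflating any value. A sketch with illustrative numbers only (not the actual cells code): a mute child is simply weighed as 1.0 and pushed down with a negative multiplier, regardless of how much RAM the other children report:

    # Illustrative numbers only: three child cells, one of them "mute".
    norm_ram = {'cell1': 1.0, 'cell2': 0.8, 'cell3': 0.9}   # normalized free RAM
    mute     = {'cell1': 0.0, 'cell2': 0.0, 'cell3': 1.0}   # 1.0 marks a mute child

    ram_multiplier = 10.0
    mute_multiplier = -10.0   # negative multiplier: strongly penalize mute children

    weights = {c: ram_multiplier * norm_ram[c] + mute_multiplier * mute[c]
               for c in norm_ram}
    # {'cell1': 10.0, 'cell2': 8.0, 'cell3': -1.0} -> the mute cell always loses,
    # without having to inflate its weight with a large 'mute_weight_value'.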