
Scheduler/NormalizedWeights

Revision as of 13:06, 10 June 2013 by Aloga (talk | contribs)

Overview

Weighers were originally used only in the scheduler, but they are now also used in the cells code. This blueprint applies to both.

Currently we use the raw values from the available weighers directly, instead of normalizing them. This makes it difficult to use multipliers properly to establish the relative importance of two weighers (a weigher with a large magnitude can overshadow one with a smaller magnitude).

For example, a weigher that returns 0 and 1 as values will need a multiplier big enough for it to be taken into account alongside the RAM weigher, which returns much higher values. Workarounds include using a sufficiently large multiplier for each weigher, or returning a sufficiently large weight value. However, even in a properly configured system, a RAM upgrade would force a review and reconfiguration of the weighers.

This blueprint aims to introduce weight normalization so that we can apply multiple weighers easily. All the weights will be normalized between 0.0 and 1.0, so that the final weight for a host will be as follows:

   weight = w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...

This makes it easier for a resource provider to configure the relative importance of the weighers.
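The weighted sum above can be sketched in Python as follows. This is an illustrative sketch only: the function and attribute names (normalize, weigh, minval, maxval, multiplier) are assumptions for the example, not the actual Nova API.

```python
def normalize(value, minval, maxval):
    """Map a raw weight into the range [0.0, 1.0]."""
    if maxval == minval:
        return 0.0
    return (value - minval) / float(maxval - minval)


def final_weight(weighers, host):
    """Combine normalized weights using per-weigher multipliers.

    Implements: weight = w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...
    """
    total = 0.0
    for w in weighers:
        raw = w.weigh(host)
        total += w.multiplier * normalize(raw, w.minval, w.maxval)
    return total
```

Because every normalized weight lies in [0.0, 1.0], the multipliers alone express the relative importance of the weighers.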

Two kinds of normalization will be provided:

  • If the weigher specifies upper and lower values, the weighed objects will be normalized with respect to these values.
  • If the weigher does not supply lower and upper limits, the maximum and minimum values found among the weighed objects will be used.
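The two normalization modes can be sketched with a single helper, where explicit bounds take precedence and the weighed objects' own extremes are the fallback. The function name and signature are illustrative assumptions, not the proposed implementation.

```python
def normalize_all(raw_weights, minval=None, maxval=None):
    """Normalize a list of raw weights to the range [0.0, 1.0].

    If the weigher declares explicit bounds, use them; otherwise fall
    back to the minimum and maximum of the weighed objects themselves.
    """
    if minval is None:
        minval = min(raw_weights)
    if maxval is None:
        maxval = max(raw_weights)
    if maxval == minval:
        return [0.0 for _ in raw_weights]
    return [(w - minval) / float(maxval - minval) for w in raw_weights]


# Explicit bounds declared by the weigher:
#   normalize_all([0, 1], minval=0, maxval=1)  -> [0.0, 1.0]
# Bounds derived from the weighed objects:
#   normalize_all([2048, 4096, 8192])          -> [0.0, ~0.33, 1.0]
```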

Examples

New weigher returning 0 and 1

Currently the RamWeigher returns the available RAM in MB as the weight for a node. Assume we implement a weigher that returns 0 and 1 as values. If our nodes have a few GB of RAM, we must set a multiplier big enough for the new weigher to influence the final weight. If we later upgrade the nodes to hundreds of GB of RAM, we must readjust the weigher as well. And if the cluster is heterogeneous, we have to study carefully how to make this weigher count at all.
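The shadowing effect can be illustrated with some made-up numbers (the values below are purely illustrative, not real RamWeigher output):

```python
# Three hosts: free RAM in MB (as the current RamWeigher reports it)
# next to a hypothetical weigher that only returns 0 or 1.
ram_raw = [4096, 8192, 16384]
binary = [0, 1, 1]

# Raw combination with equal multipliers: the 0/1 weigher is negligible
# next to values in the thousands.
raw_scores = [r + b for r, b in zip(ram_raw, binary)]


def norm(vals):
    """Scale a list of values into [0.0, 1.0]."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / float(hi - lo) for v in vals]


# After normalization both series live in [0.0, 1.0], so equal multipliers
# mean equal influence, regardless of how much RAM the hosts have.
normalized_scores = [nr + nb for nr, nb in zip(norm(ram_raw), norm(binary))]
```

Upgrading the hosts' RAM changes `ram_raw` but not the normalized range, so no reconfiguration of the 0/1 weigher is needed.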

Cells RamByInstanceTypeWeigher and MuteChildWeigher

The MuteChildWeigher is a good example of inflating a weight so that it is significant enough to compete with the RamByInstanceTypeWeigher. It defines 'mute_weight_value=1000.0' and 'mute_weight_multiplier=-10.0' so that it can really influence the final weight. However, if we introduce some nodes with larger RAM, we would need to reconfigure it.

Comments

See comments in https://review.openstack.org/#/c/27160/ :