| Commit message | Author | Age | Files | Lines |
This spec proposes to add the ability for users to use an
``Aggregate``'s ``metadata`` to override the global config options
for weights, achieving more fine-grained control over resource
weights.
blueprint: per-aggregate-scheduling-weight
Change-Id: I6e15c6507d037ffe263a460441858ed454b02504
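The override described above can be sketched roughly as follows. This is a minimal illustration, not Nova's implementation: the helper name `effective_multiplier` and the flat config dict are assumptions made for the example.

```python
# Hypothetical sketch: prefer a weight multiplier set in an aggregate's
# metadata, falling back to the global config option otherwise.
GLOBAL_CONF = {"ram_weight_multiplier": 1.0}

def effective_multiplier(key, aggregate_metadata, conf=GLOBAL_CONF):
    """Return the per-aggregate override if present, else the global value."""
    if key in aggregate_metadata:
        # Aggregate metadata values are stored as strings.
        return float(aggregate_metadata[key])
    return conf[key]

# A host in an aggregate that overrides the multiplier:
print(effective_multiplier("ram_weight_multiplier",
                           {"ram_weight_multiplier": "2.5"}))  # 2.5
# A host whose aggregates set nothing falls back to the global value:
print(effective_multiplier("ram_weight_multiplier", {}))       # 1.0
```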
Change-Id: I3c139565fc9300449eb25d87dfcc9d4177bc2085
If there is only one host available, calculating the weight
makes no sense: whatever the weight is, nova will
use that host.
Closes-Bug: 1448015
Change-Id: I38aed6a6e45d24dc0daf2e96c353f394f3ef5e3f
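The short-circuit described above can be sketched as below. The function names are illustrative, not Nova's actual scheduler API.

```python
# Hypothetical sketch: skip weighing entirely when only one candidate
# host exists, since it will be selected regardless of its weight.
def pick_host(hosts, weigh):
    if len(hosts) == 1:
        return hosts[0]
    return max(hosts, key=weigh)

# With a single host the (possibly expensive) weigh function is never called:
calls = []
def weigh(host):
    calls.append(host)
    return len(host)

only = pick_host(["node1"], weigh)
print(only, len(calls))  # node1 0
```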
Right now, filters/weighers are instantiated on every invocation of the
scheduler. This is both time consuming and unnecessary. In cases where
a filter/weigher tries to be smart and store/cache something between
invocations, this actually prevents it.
This change makes the base filter/weigher functions take objects instead
of classes and lets schedulers create the objects only once and then
reuse them.
This fixes a known bug in trusted_filter, which tries to cache things.
Related to blueprint scheduler-optimization
Change-Id: I3174ab7968b51c43c0711033bac5d4bc30938b95
Closes-Bug: #1223450
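Why object (rather than class) plugins matter can be sketched as below: state such as a cache only survives between scheduler invocations if the same weigher object is reused. This is illustrative code, not the actual trusted_filter.

```python
# Hypothetical weigher that caches an expensive per-host lookup.
class CachingWeigher(object):
    def __init__(self):
        self._cache = {}
        self.lookups = 0

    def _expensive_lookup(self, host):
        self.lookups += 1        # stand-in for a slow external query
        return len(host)

    def weigh(self, host):
        if host not in self._cache:
            self._cache[host] = self._expensive_lookup(host)
        return self._cache[host]

# Old behaviour: a fresh object per invocation, so the cache never helps.
for _ in range(3):
    CachingWeigher().weigh("node1")     # 3 lookups in total

# New behaviour: the scheduler creates the object once and reuses it.
weigher = CachingWeigher()
for _ in range(3):
    weigher.weigh("node1")              # only 1 lookup
print(weigher.lookups)  # 1
```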
Using six.add_metaclass instead of '__metaclass__' for Python 3.x
compatibility.
Change-Id: I04848196c8bc553fec19dd447a8fdd6dacdf64b8
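The motivation is that the `__metaclass__` attribute is only honoured by Python 2, while the decorator form works on both 2 and 3. To keep this sketch self-contained it uses a minimal re-implementation of what `six.add_metaclass` does (rebuild the class under the given metaclass); it omits details like `__slots__` handling that the real six covers.

```python
import abc

def add_metaclass(metaclass):
    """Minimal stand-in for six.add_metaclass: rebuild the decorated
    class with the given metaclass."""
    def wrapper(cls):
        body = dict(cls.__dict__)
        body.pop('__dict__', None)
        body.pop('__weakref__', None)
        return metaclass(cls.__name__, cls.__bases__, body)
    return wrapper

@add_metaclass(abc.ABCMeta)
class BaseWeigher(object):
    @abc.abstractmethod
    def weigh_objects(self, objs):
        """Return a list of weights."""

# Abstract classes built this way cannot be instantiated directly...
try:
    BaseWeigher()
    instantiable = True
except TypeError:
    instantiable = False
print(instantiable)  # False

# ...but concrete subclasses can.
class RamWeigher(BaseWeigher):
    def weigh_objects(self, objs):
        return [1.0 for _ in objs]
```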
The weight system is used by the scheduler and the cells code.
Currently this system uses the raw values instead of normalizing them.
This makes it difficult to properly use multipliers for establishing the
relative importance between two weighers (one big magnitude could
overshadow a smaller one). This change introduces weight normalization so
that:
- From an operator's point of view, we can prioritize the weighers that
we are applying. The only way to do this is to be sure that all the
weighers give a value in a known range, so that it is
not necessary to artificially use a huge multiplier to prioritize a
weigher.
- From a weigher developer's point of view, somebody implementing
one only has to care about 1) returning a list of values, and 2) setting
the minimum and maximum values the weights can range over, if they are
needed and significant for the weighing. For a weigher
developer there are two use cases:
Case 1: Use of a percentage instead of absolute values (for example, %
of free RAM). If we compare two nodes focusing on the percentage of free
RAM, the maximum value for the weigher is 100. If we have two nodes, one
with 2048 MB total/1024 MB free, and the second with 1024 MB total/512 MB
free, they will both get the same weight, since they have the same % of
free RAM (that is, 50%).
Case 2: Use of absolute values. In this case, the maximum of the weigher
will be the maximum of the values in the list (in the case above, 1024)
or the maximum value that the magnitude could take (in the case above,
2048). How this maximum is set is a decision for the developer, who may
let the operator choose the behaviour of the weigher.
- From the point of view of the scheduler, we ensure that it uses
normalized values, rather than delegating the normalization mechanism to
the weighers.
Changes introduced by this commit:
1) It introduces weight normalization so that we can apply multipliers
easily. All the weights for an object are normalized between 0.0 and
1.0 before being summed up, so that the final weight for a host is:
weight = w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...
2) weights.BaseWeigher has been changed into an ABC so that we enforce
that all weighers have the expected methods.
3) weights.BaseWeigher.weigh_objects() no longer sums the
computed weights onto the object; instead it returns a list that is
then normalized and added to the existing weight by BaseWeightHandler.
4) The existing weighers are adapted to the above changes. Namely:
- New 'offset_weight_multiplier' for the cell weigher
nova.cells.weights.weight_offset.WeightOffsetWeigher
- The existing multiplier methods have been renamed.
5) Unit tests for all of the introduced changes.
Implements blueprint normalize-scheduler-weights
DocImpact: Weights for an object are now normalized before being summed
up. This means that each weigher will take a maximum value of 1. This
may have an impact for operators that are using more than one weigher
(currently there is only one weigher: RamWeigher) and for operators using
cells (where there are several weighers). The multipliers in use should
be reviewed and adjusted in case they have been modified.
DocImpact: There is a new configuration option 'offset_weight_multiplier'
in nova.cells.weights.weight_offset.WeightOffsetWeigher
Change-Id: I81bf90898d3cb81541f4390596823cc00106eb20
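The normalization scheme described above can be sketched as follows. This assumes plain min-max normalization and illustrative helper names (`normalize`, `final_weights`); it is not Nova's actual API.

```python
def normalize(values, minval=None, maxval=None):
    """Map values into [0.0, 1.0]. If the weigher declares no explicit
    range, use the min/max of the list itself (Case 2 above)."""
    if minval is None:
        minval = min(values)
    if maxval is None:
        maxval = max(values)
    if maxval == minval:
        return [0.0 for _ in values]
    return [(v - minval) / float(maxval - minval) for v in values]

def final_weights(per_weigher, multipliers):
    """weight = w1_mult * norm(w1) + w2_mult * norm(w2) + ..."""
    totals = [0.0] * len(per_weigher[0])
    for values, mult in zip(per_weigher, multipliers):
        for i, nv in enumerate(normalize(values)):
            totals[i] += mult * nv
    return totals

# Two weighers over three hosts: free RAM in MB and free disk in GB.
ram = [512, 1024, 2048]
disk = [10, 50, 100]
totals = final_weights([ram, disk], [1.0, 1.0])
print(totals)
# Each weigher now contributes at most 1.0 per host, so a large-magnitude
# metric (RAM in MB) no longer overshadows a small one (disk in GB).
```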
blueprint fix-nova-typos
Change-Id: I0971b98999381183c0c77fff1d569180606e338b
Update all references of "LLC" to "Foundation".
Change-Id: I009e86784ef4dcf38882d64b0eff484576e04efe
This makes scheduling weights more plugin friendly and creates shared
code that can be used by the host scheduler as well as the future cells
scheduler. Weighing classes can now be specified much like you can
specify scheduling host filters.
The new weights code reverses the old behavior where lower weights win.
Higher weights are now the winners.
The least_cost module and configs have been deprecated, but are still
supported for backwards compatibility. The code has moved to
nova.scheduler.weights.least_cost and been modified to work with the new
loadable-class code. If any of the least_cost related config options are
specified, this least_cost weigher will be used.
For those not overriding the default least_cost config values, the new
RamWeigher class will be used. The default behavior of the RamWeigher
class is the same default behavior as the old least_cost module.
The new weights code introduces a new config option
'scheduler_weight_classes', which is used to specify which weigher classes
to use. The default is 'all classes', but this is overridden if the
deprecated least_cost config options are used, as mentioned above.
The RamWeigher class introduces a new config option
'ram_weight_multiplier'. The default of 1.0 causes weights equal to the
free memory in MB to be returned, so hosts with more free memory are
preferred (spreading). Changing this value to a negative
number such as -1.0 reverses the behaviour (fill-first).
DocImpact
Change-Id: I1e5e5039c299db02f7287f2d33299ebf0b9732ce
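The spread-versus-fill behaviour of the multiplier can be sketched as below. This is illustrative code following the description above, not the actual RamWeigher implementation.

```python
# Hypothetical sketch: weight = free RAM (MB) * ram_weight_multiplier.
def ram_weight(free_ram_mb, multiplier=1.0):
    return multiplier * free_ram_mb

hosts = {"big": 4096, "small": 512}

# multiplier = 1.0: the emptiest host scores highest -> spreading.
spread = max(hosts, key=lambda h: ram_weight(hosts[h], 1.0))
# multiplier = -1.0: the fullest host scores highest -> fill-first.
fill = max(hosts, key=lambda h: ram_weight(hosts[h], -1.0))
print(spread, fill)  # big small
```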