Revision as of 05:10, 2 December 2015
Introduction
In Liberty, heat introduced support for monasca-based auto-scaling, which works similarly to the existing ceilometer-based auto-scaling in heat. To use this feature, follow the steps below:
Setup the cloud environment
- Install python-monascaclient [1] on every node running a heat-engine in your cloud environment, then restart those heat-engines.
- Install and configure the monasca-agent [2] to monitor the VMs' performance metrics.
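As a rough sketch, the steps above might look like the following on a systemd-based deployment. The service name openstack-heat-engine, the monasca-setup arguments, and the Keystone URL are assumptions that vary by distribution and deployment; adapt them to your environment.

```shell
# On every heat-engine node: install the monasca client library,
# then restart the engine so it picks up the new dependency.
pip install python-monascaclient
sudo systemctl restart openstack-heat-engine   # service name varies by distro

# On each VM to be monitored: install and configure the monasca agent.
# monasca-setup generates the agent configuration from Keystone credentials
# (user, password, project, and Keystone URL below are placeholders).
pip install monasca-agent
sudo monasca-setup -u monasca-agent -p PASSWORD \
    --project_name mini-mon --keystone_url http://KEYSTONE_HOST:35357/v3
```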
Create the auto-scale template
Use the monasca template [3] as a reference and update it with the required alarm-definition expression. In the expression, it is mandatory to scope the measurement with a 'scale_group' dimension:
avg(vm.cpu.utilization_perc{scale_group=XXX})
The same 'scale_group' value must be set as metadata on the scaling group's member elements (nova instances):
metadata: {"scale_group": XXX}
This lets monasca aggregate the metric across all members of the given scale group and determine whether the threshold defined in the alarm expression has been reached. In practice, scale_group XXX is usually filled with the stack ID or the scaling group ID, which guarantees uniqueness.
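Putting the expression and the metadata together, a minimal HOT fragment might look like the sketch below. It assumes the OS::Monasca::AlarmDefinition and OS::Monasca::Notification resource types from heat's monasca plugin; the threshold of 80, the resource names, and the image/flavor values are illustrative and not taken from the referenced template [3].

```yaml
heat_template_version: 2015-10-15

resources:
  # Webhook that invokes the scaling policy when the alarm triggers
  # (scale_up_policy is assumed to be defined elsewhere in the template).
  scale_up_notification:
    type: OS::Monasca::Notification
    properties:
      type: webhook
      address: {get_attr: [scale_up_policy, alarm_url]}

  # Alarm definition: average CPU utilization across all members of this
  # stack's scale group, scoped by the mandatory scale_group dimension.
  cpu_high_alarm:
    type: OS::Monasca::AlarmDefinition
    properties:
      name: cpu-high
      expression:
        str_replace:
          template: avg(vm.cpu.utilization_perc{scale_group=sg_id}) > 80
          params:
            sg_id: {get_param: "OS::stack_id"}
      alarm_actions:
        - {get_resource: scale_up_notification}

  # Each group member tags its metrics with the same scale_group value,
  # here the stack ID, so monasca can aggregate them together.
  server:
    type: OS::Nova::Server
    properties:
      image: cirros      # illustrative
      flavor: m1.tiny    # illustrative
      metadata:
        scale_group: {get_param: "OS::stack_id"}
```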
Mitaka User session
At the Mitaka summit, monasca-based auto-scaling was demonstrated; the recorded video is available here [4].