
Heat/AutoScaling


Heat Autoscaling now and beyond

AS == AutoScaling

Now

The AWS-compatible AS support is broken into a number of logical objects:

  • AS group (heat/engine/resources/autoscaling.py)
  • AS policy (heat/engine/resources/autoscaling.py)
  • AS Launch Config (heat/engine/resources/autoscaling.py)
  • Cloud Watch Alarms (heat/engine/resources/cloud_watch.py, heat/engine/watchrule.py)

Dependencies

Note that the in-template resource dependencies are:

  • Alarm
    • Group
    • Policy
      • Group
        • Launch Config
        • [Load Balancer] - optional

This means the creation order should be [LB, LC, Group, Policy, Alarm], as illustrated by the template fragment sketched below.
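
For illustration, a minimal template fragment wiring these resources together might look like the following. Resource names and property values are placeholders, not a complete working template; only the dependency structure matters here.

  # Hypothetical CFN-style template fragment (names and property values are
  # placeholders) showing the dependency order LaunchConfig -> Group ->
  # Policy -> Alarm via Ref-style references.
  template_fragment = {
      "Resources": {
          "LaunchConfig": {
              "Type": "AWS::AutoScaling::LaunchConfiguration",
              "Properties": {"ImageId": "my-image", "InstanceType": "m1.small"},
          },
          "Group": {
              "Type": "AWS::AutoScaling::AutoScalingGroup",
              "Properties": {
                  "LaunchConfigurationName": {"Ref": "LaunchConfig"},
                  "MinSize": "1",
                  "MaxSize": "3",
                  "AvailabilityZones": ["nova"],
              },
          },
          "ScaleUpPolicy": {
              "Type": "AWS::AutoScaling::ScalingPolicy",
              "Properties": {
                  "AutoScalingGroupName": {"Ref": "Group"},
                  "AdjustmentType": "ChangeInCapacity",
                  "ScalingAdjustment": "1",
              },
          },
          "CPUAlarmHigh": {
              "Type": "AWS::CloudWatch::Alarm",
              "Properties": {
                  "AlarmActions": [{"Ref": "ScaleUpPolicy"}],
                  "MetricName": "CPUUtilization",
                  "Threshold": "50",
                  "ComparisonOperator": "GreaterThanThreshold",
              },
          },
      }
  }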


Current architecture

When a stack is created with these resources the following happens:

  1. Alarm: the alarm rule is written into the DB
  2. Policy: nothing interesting
  3. LaunchConfig: it is just storage
  4. Group: the Launch config is used to create the initial number of servers.
  5. the new servers start posting metric samples back to the CloudWatch API

When an alarm is triggered in watchrule.py the following happens:

  1. the periodic task runs the watch rule
  2. when an alarm is triggered, it calls (a plain Python call) the policy resource's policy.alarm()
  3. the policy figures out whether it needs to adjust the group size; if it does, it calls (again via a Python call) group.adjust() (see the sketch below)
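
A rough sketch of that in-process call chain; the class and method layout here is illustrative only, not the actual Heat code (which lives in heat/engine/watchrule.py and heat/engine/resources/autoscaling.py).

  # Illustrative sketch of the current in-process flow.
  class ScalingPolicy(object):
      def __init__(self, group, adjustment, adjustment_type):
          self.group = group
          self.adjustment = adjustment
          self.adjustment_type = adjustment_type

      def alarm(self):
          # Called directly (a plain Python call) by the watch rule when it fires.
          self.group.adjust(self.adjustment, self.adjustment_type)


  def run_watch_rules(rules):
      # Periodic task: evaluate each rule and invoke its policies when it alarms.
      for rule in rules:
          if rule.evaluate() == "ALARM":
              for policy in rule.policies:
                  policy.alarm()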

Beyond

The following blueprint and its dependents are currently *out of date* and do not reflect the design laid out in this document: https://blueprints.launchpad.net/heat/+spec/heat-autoscaling

Use Cases

  1. Users want to use AutoScale without using Heat templates.
  2. Users want to use AutoScale *with* Heat templates.
  3. Administrators or automated processes want to add or remove *specific* instances from a scaling group. (one node was compromised or had some critical error?)


General Ideas

  • Implement scaling groups, policies, and monitoring integration in a separate API
  • That separate API will be usable by end-users directly, or via Heat templates.
  • That API will create a Heat template and its own Heat stack whenever an AutoScalingGroup is created within it.
  • As events occur that trigger a policy changing the number of instances in a scaling group, the AutoScale API will generate a new template and update-stack the stack that it manages (a sketch follows this list).
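
A minimal sketch of that regeneration step, assuming python-heatclient; the function name, the template layout, and the exact shape of the stack-update call are illustrative rather than the final design.

  # Minimal sketch, assuming python-heatclient; names are made up.
  def resize_group_stack(heat, stack_id, resource_snippet, new_size):
      # Build a template containing `new_size` copies of the user's snippet.
      resources = dict(("server-%d" % i, resource_snippet)
                       for i in range(new_size))
      template = {"HeatTemplateFormatVersion": "2012-12-12",
                  "Resources": resources}
      # A single update-stack call lets Heat work out which servers to
      # create or delete to reach the new size.
      heat.stacks.update(stack_id, template=template)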

AutoScaling

AutoScaling will be delegated to a service external to Heat (but implemented inside the Heat project/codebase). It will be responsible for AutoScalingGroups and ScalingPolicies. Monitoring services (e.g. Ceilometer) will communicate with the AutoScaling service to execute policies, and the AutoScaling service will execute those policies by updating a stack in Heat.

The communication works as follows:

  • When AutoScaling resources are created in Heat, they will register the data with the AutoScaling service via POSTs to its API. This includes the AutoScalingGroup and the ScalingPolicy (see the sketch after this list).
  • When Ceilometer (or any other monitoring service) hits an AutoScaling webhook, the AutoScaling service will execute the associated policy (unless it's on cooldown).
  • During policy execution, the AutoScaling service will talk to Heat to manipulate the stack that lives within Heat.
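
For illustration, the Heat resource side of that registration might look like this; the /groups path, the payload shape, and the returned id field are all assumptions about the AS API, not a settled interface.

  # Sketch of an OS::Heat::AutoScalingGroup resource registering itself
  # with the AS API; endpoint path and payload shape are assumptions.
  import json

  import requests


  def register_group(as_endpoint, token, name, min_size, max_size, cooldown,
                     resource_snippet):
      payload = {"name": name,
                 "min_size": min_size,
                 "max_size": max_size,
                 "cooldown": cooldown,
                 "resource_snippet": resource_snippet}
      resp = requests.post(as_endpoint + "/groups",
                           data=json.dumps(payload),
                           headers={"X-Auth-Token": token,
                                    "Content-Type": "application/json"})
      resp.raise_for_status()
      # Store the returned group id as the Heat resource's physical id.
      return resp.json()["id"]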

Using AutoScale from Heat templates

The new resources listed below will do the following (a template sketch follows the list).

  • OS::Heat::AutoScalingGroup: Invokes the AS API to create a group.
  • OS::Heat::AutoScalingPolicy: Invokes the AS API to create a policy.
  • OS::Heat::AutoScalingAlarm: Invokes the Ceilometer API to register an AutoScalingPolicy webhook with a new alarm. It is passed a webhook pointing to the AutoScalingPolicy.
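
A hypothetical template snippet wiring these three resources together; the property and attribute names are assumptions, since the resource interfaces are not yet defined.

  # Hypothetical template snippet; property and attribute names are
  # assumptions, not a finalised resource interface.
  autoscale_snippet = {
      "Resources": {
          "web_group": {
              "Type": "OS::Heat::AutoScalingGroup",
              "Properties": {
                  "MinSize": 1,
                  "MaxSize": 5,
                  "Cooldown": 60,
                  "ResourceSnippet": {
                      "Type": "OS::Nova::Server",
                      "Properties": {"image": "my-image", "flavor": "m1.small"},
                  },
              },
          },
          "scale_up_policy": {
              "Type": "OS::Heat::AutoScalingPolicy",
              "Properties": {
                  "GroupId": {"Ref": "web_group"},
                  "Change": 1,
                  "ChangeType": "change_in_capacity",
              },
          },
          "cpu_alarm_high": {
              "Type": "OS::Heat::AutoScalingAlarm",
              "Properties": {
                  "Meter": "cpu_util",
                  "Threshold": 50,
                  "ComparisonOperator": "gt",
                  # Webhook pointing at the policy, as described above
                  # ("WebhookURL" is an assumed attribute name).
                  "AlarmAction": {"Fn::GetAtt": ["scale_up_policy", "WebhookURL"]},
              },
          },
      }
  }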

When an alarm is triggered in Ceilometer the following happens:

  1. Ceilometer will invoke the webhook associated with the alarm (served by the AS API)
  2. the AS Policy figures out whether it needs to adjust the group size; if it does, it updates the internal Heat template and posts an update-stack on the stack that it manages (sketched below)
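
Building on the resize sketch above, policy execution in the AS service might look roughly like this; the object attributes and the cooldown handling are assumptions.

  import time


  def execute_policy(policy, group, heat):
      now = time.time()
      if now - policy.last_executed < policy.cooldown:
          return  # still on cooldown; ignore this trigger

      if policy.change_type == "change_in_capacity":
          desired = group.current_size + policy.change
      elif policy.change_type == "percentage_change_in_capacity":
          desired = int(group.current_size * (1 + policy.change / 100.0))
      else:  # "exact_capacity"
          desired = policy.change

      # Clamp to the group's configured bounds.
      desired = max(group.min_size, min(group.max_size, desired))
      if desired != group.current_size:
          # Regenerate the internal template and update the managed stack
          # (see the resize_group_stack sketch earlier in this document).
          resize_group_stack(heat, group.stack_id, group.resource_snippet,
                             desired)
          group.current_size = desired
      policy.last_executed = now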

The AutoScaling API

  • ScalingGroup:
    • POST: Creates a scaling group given the following:
      • name
      • max_size
      • min_size
      • cooldown
      • resource_snippet: the resource that will be duplicated in order to scale
  • ScalingPolicy:
    • POST: Creates a scaling policy given the following (example request bodies are sketched after this list):
      • name
      • cooldown
      • change: a number whose effect depends on change_type.
      • change_type: one of "change_in_capacity", "percentage_change_in_capacity", or "exact_capacity" -- describes what this policy does (and the meaning of "change")
      • trigger: a JSON sub-object that describes how this policy will be executed.
        • type: webhook / ceilometer_alarm / schedule
        • extra attributes based on the type ("schedule" will require a scheduling specification, "ceilometer_alarm" will specify how to configure the alarm, etc.)
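
For illustration, request bodies for the two POSTs above might look like this; the field names follow the lists above, while the values and the trigger's type-specific attributes are made up.

  # Illustrative request bodies; values and the type-specific trigger
  # attributes are assumptions.
  scaling_group_request = {
      "name": "web-workers",
      "min_size": 1,
      "max_size": 10,
      "cooldown": 60,
      "resource_snippet": {
          "Type": "OS::Nova::Server",
          "Properties": {"image": "my-image", "flavor": "m1.small"},
      },
  }

  scaling_policy_request = {
      "name": "scale-up-on-cpu",
      "cooldown": 60,
      "change": 1,
      "change_type": "change_in_capacity",
      "trigger": {
          "type": "ceilometer_alarm",
          # Extra, type-specific attributes (assumed shape):
          "meter_name": "cpu_util",
          "threshold": 50,
          "comparison_operator": "gt",
      },
  }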

Thoughts: maybe the "trigger" should be separate from the ScalingPolicy entirely? For example, a scaling policy could just define the change and change type, with triggers (webhooks, schedules, alarms) created as separate REST objects that reference that policy.

Authentication

  • how do we authenticate the request from ceilometer to AS?
  • is this a special unprivileged user "ceilometer-alarmer" that we trust?
  • The AS API should have access to a Trust for the user who owns the resources it manages, and pass that Trust to Heat.

Securing Webhooks

Many systems just treat the webhook URL as a secret (with a big random UUID in it, generated *per client*). I think this is actually fine, but it has two problems we can easily solve:

  • there are lots of places other than the actual SSL stream that URLs can be seen. Logs of the Autoscale HTTP server, for example.
  • it's susceptible to replay attacks (if you sniff one request, you can replay it to keep performing the same operation, like scaling up or down)


The first is easy to solve by putting the important data into the POST body. The second can be solved with a nonce plus a timestamp component.

The API for creating a webhook in the autoscale server should return two things: the webhook URL and a random signing secret. When Ceilometer (or any client) hits the webhook URL, it should do the following:

  • include a "timestamp" argument with the current timestamp
  • include another random nonce
  • sign the request with the signing secret

(to solve the first problem from above, the timestamp and nonce should be in the POST request body instead of the URL)
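
A sketch of the client side of that scheme; the use of HMAC-SHA256 and the signature header name are assumptions, not a settled protocol.

  import hashlib
  import hmac
  import json
  import time
  import uuid

  import requests


  def post_signed_webhook(webhook_url, signing_secret, payload=None):
      # Timestamp and nonce go in the POST body, not the URL, so they never
      # appear in request-line logs.
      body = dict(payload or {})
      body["timestamp"] = int(time.time())
      body["nonce"] = uuid.uuid4().hex
      raw = json.dumps(body, sort_keys=True)
      signature = hmac.new(signing_secret.encode("utf-8"),
                           raw.encode("utf-8"),
                           hashlib.sha256).hexdigest()
      return requests.post(webhook_url,
                           data=raw,
                           headers={"Content-Type": "application/json",
                                    "X-AS-Signature": signature})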

And any time the AS service receives a webhook, it should (a verification sketch follows this list):

  • verify the signature
  • ensure that the timestamp is reasonably recent (no more than a few minutes old, and no more than a few minutes into the future)
  • check to see if the timestamp+nonce has been used recently (we only need to store the nonces used within that "reasonable" time window)
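
And a matching sketch of the verification on the AS side; the in-memory nonce dict is a stand-in for whatever shared store would actually be used, and the five-minute window is only a guess at "reasonably recent".

  import hashlib
  import hmac
  import json
  import time

  MAX_SKEW = 300  # seconds; the "reasonably recent" window (value is a guess)
  _seen_nonces = {}  # nonce -> timestamp; stand-in for a shared store


  def verify_webhook(raw_body, signature, signing_secret):
      expected = hmac.new(signing_secret.encode("utf-8"),
                          raw_body.encode("utf-8"),
                          hashlib.sha256).hexdigest()
      if not hmac.compare_digest(expected, signature):
          return False

      body = json.loads(raw_body)
      now = time.time()
      if abs(now - body["timestamp"]) > MAX_SKEW:
          return False  # too old, or too far in the future

      # Only nonces inside the time window need to be remembered.
      for nonce, seen_at in list(_seen_nonces.items()):
          if now - seen_at > MAX_SKEW:
              del _seen_nonces[nonce]
      if body["nonce"] in _seen_nonces:
          return False  # replayed request
      _seen_nonces[body["nonce"]] = now
      return True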


On top of all of this, of course, webhooks should be revocable.


[Qu] What if we do this in the context of Heat (where the DB is not accessible from the API daemon)?

  1. We are going to have to send all webhooks to the heat-engine for verification.
  2. This is because we can't check the UUID in the API, which makes it very easy to mount a DoS attack. Any ideas on how to solve this?

[An] This doesn't sound like a unique problem; it should be solved by rate limiting, as other parts of OpenStack do.

[Qu] Why make Autoscale a separate service?

[An] To clarify, service == REST server (to me)

Initially because someone wanted it separate (Rackers), but I think it is the right approach long term.

Heat should not be in the business of implementing too many services internally; rather, it should provide resources to orchestrate them.

monitoring <> Xaas.policy <> heat.resource.action()

Some cool things we could do with this:

  1. better instance HA (restarting servers when they are ill) - and smarter logic defining what is "ill"
  2. autoscaling
  3. energy saving (could be linked to autoscaling)
  4. automated backup (calling snapshots at regular time periods)
  5. autoscaling using shelving? (maybe for faster response)

I guess we could put all of this into one service (an all-purpose policy service)?