
Ceilometer/blueprints/event-triggers

Introduction:

Triggers are designed to make it easy to aggregate data from events that occur over time. Many processes in OpenStack will create Notifications as they proceed. These will be represented as Events within Ceilometer. Such processes may take place over a long period of time, possibly hours, and the notifications may be emitted from many different OpenStack nodes, and even different systems within OpenStack, such as Compute, or Image storage. They may be processed by multiple different Collector nodes in a Ceilometer installation, and due to the trade-offs of reliable messaging within AMQP, may not be received in order.

To properly aggregate statistics from these sets of events, we need to collect the events we are interested in over time. Once we have the ones we need, we order them by their timestamps and process them as a batch. If we never receive all of the events we need, such as when there is a failure in the process we are trying to gather information on, we need to expire the list we have collected and possibly perform alternate processing, such as updating failure statistics.

Triggers accomplish this. They allow for collecting a list of relevant events over a period of time, detecting when we have all of the events we need, ordering them properly, and then processing them using an event pipeline (as defined in the Event Pipelines blueprint: https://blueprints.launchpad.net/ceilometer/+spec/notification-pipelines). They also allow whatever events we have collected to be processed through an alternate event pipeline if we fail to collect all of the events we need within an appropriate period of time.


Terminology:

Trigger definition:

A configuration for a class of triggers. The definition acts as a template that trigger instances are created from. It defines the name, distinguished_by traits, pipelines, expiration, matching criteria, etc.

Trigger instance:

A specific instance of a trigger, persisted in the datastore, with a list of events. It has:

  • first_event: The timestamp of the earliest event in the event list.
  • last_event: The timestamp of the latest event in the event list. This is updated when a new event is matched.
  • expire_timestamp: The actual time when this trigger instance will expire. It is derived from either the first_event timestamp or the last_event timestamp via the time expression set in the definition's expiration field. It is updated if the timestamp it's based on changes.
  • fire_timestamp: The actual time when the trigger will fire. This is set based on the current time, plus any fire_delay, if the trigger's firing criteria are met.
  • name: The name of the trigger from its definition.
  • state: Triggers can be in ACTIVE, READY_TO_FIRE, READY_TO_EXPIRE, FIRING, EXPIRING, ERROR, or EXPIRE_ERROR states.
  • distinguished_by: The list of distinguishing traits and their values for this instance.

Time expression:

A simple DSL, like a regex for time comparisons. Time expressions can be absolute, like first@00:00 (meaning midnight on the day of the first event), or relative, like last+3h (3 hours after the last event). They can be based on the last event's timestamp or the first event's timestamp. These can be used to set trigger expiration, or as matching values for datetime traits. For matches, you can use a single time expression to match a specific timestamp, or a time range (which is just two time expressions separated by "to", such as "first@00:00 to last+2h").
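
As a rough illustration of how such expressions might be evaluated, the following sketch handles only the forms shown above (first@HH:MM, last@HH:MM, first±N h/m/s, last±N h/m/s, and "X to Y" ranges); the function names and the exact grammar are assumptions, not the final DSL.

  # A minimal sketch of evaluating the time expressions described above; the
  # grammar and helper names here are illustrative assumptions.
  import re
  from datetime import datetime, timedelta

  _EXPR = re.compile(r'^(first|last)(?:@(\d{2}):(\d{2})|([+-])(\d+)([hms]))$')
  _UNITS = {'h': 'hours', 'm': 'minutes', 's': 'seconds'}

  def eval_time_expression(expr, first_event, last_event):
      """Resolve a time expression to an absolute datetime."""
      m = _EXPR.match(expr.strip())
      if not m:
          raise ValueError("bad time expression: %s" % expr)
      anchor = first_event if m.group(1) == 'first' else last_event
      if m.group(2) is not None:  # absolute form, e.g. first@00:00
          return anchor.replace(hour=int(m.group(2)), minute=int(m.group(3)),
                                second=0, microsecond=0)
      sign = -1 if m.group(4) == '-' else 1  # relative form, e.g. last+3h
      return anchor + timedelta(**{_UNITS[m.group(6)]: sign * int(m.group(5))})

  def eval_time_range(range_expr, first_event, last_event):
      """A range is two expressions joined by 'to', e.g. 'first@00:00 to last+2h'."""
      start, end = (eval_time_expression(part, first_event, last_event)
                    for part in range_expr.split(' to '))
      return start, end

  # Example: "first+4h" relative to a first event at 10:15 gives 14:15.
  first = datetime(2014, 1, 7, 10, 15)
  print(eval_time_expression("first+4h", first, first))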

Criteria:

A description of a pattern to match events against. A criterion is basically a hash which includes: event_type, which can be wildcarded; timestamp, which can be matched with a time expression; and a list of trait names and values, which can be compared to constants or expressions. Fire criteria can also have number, which is the number of events matching a given criterion that must be in the trigger's list of events for it to fire. Criteria are used for: match_criteria, which determine what events to collect; fire_criteria, which determine when we have the events we need to fire; and load_criteria, which describe historical events we may need to load from the datastore.
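
To make the matching semantics concrete, here is a hedged sketch of checking one event against a single criterion; the dictionary layout for events and the limited expression handling are simplifying assumptions for illustration.

  # A sketch of matching an event against one criterion; the event layout and
  # the expression handling shown here are assumptions.
  import fnmatch

  def matches_criterion(event, criterion):
      """Return True if the event satisfies every part of the criterion."""
      # event_type may use glob-style wildcards, e.g. "compute.instance.*".
      pattern = criterion.get('event_type')
      if pattern and not fnmatch.fnmatch(event['event_type'], pattern):
          return False
      # timestamp: a (start, end) pair already resolved from a time expression.
      ts_range = criterion.get('timestamp')
      if ts_range and not (ts_range[0] <= event['timestamp'] <= ts_range[1]):
          return False
      # traits: every named trait must match its constant or expression.
      for name, expected in criterion.get('traits', {}).items():
          value = event.get('traits', {}).get(name)
          if expected == 'is NULL':
              if value is not None:
                  return False
          elif expected == 'is not NULL':
              if value is None:
                  return False
          elif isinstance(expected, str) and expected[:1] in ('<', '>'):
              op, bound = expected[0], float(expected[1:])
              if value is None or not (value < bound if op == '<' else value > bound):
                  return False
          elif value != expected:
              return False
      return True

  def matches_any(event, criteria):
      """match_criteria semantics: an event matches if any one criterion matches."""
      return any(matches_criterion(event, c) for c in criteria)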

Event pipeline:

A pipeline for processing events (see https://blueprints.launchpad.net/ceilometer/+spec/notification-pipelines). Once we have the needed events, the event pipelines do the actual processing. Triggers send events to the pipeline listed as fire_pipeline when the trigger's fire criteria are matched, or, optionally, to an expire_pipeline if the trigger expires without firing.


Defining the triggers:

A trigger definition has the following fields (an example definition is sketched after the list):

  • name: A unique name to identify the trigger
  • distinguished_by: A list of traits which distinguish this trigger. A unique trigger instance (with its own list of events) is created for each unique combination of distinguishing traits. If trigger "instance_create" is distinguished by instance_id, then there will be a separate "instance_create" trigger instance created for each unique instance_id in events that match the trigger's criteria.
  • expiration: Time expression that sets when the trigger expires. Can be specified relative to the first (earliest) event in the event list or the last (latest). For example, if the expiration is set to "first+4h", the trigger will expire 4 hours after the first event matching its match criteria is received. If it's set to "last+2h", the trigger will expire 2 hours after the last event matching its criteria is received (and its expiration clock will be reset every time it receives an event).
  • fire_delay: Number of seconds to wait after the trigger is ready to fire, before it actually fires. (This allows collection of any out-of-order 'straggler' events.) Defaults to 0.
  • match_criteria: A list of criteria for events to match. Each matched event will be placed on the trigger's event list. An event matches if it matches any of the criteria. Each criterion consists of:
  • event_type: Type of event to match. May be wildcarded using glob-style wildcards.
  • timestamp: Time range of event timestamp to match
  • traits: Hash of traits to match. The hash is trait_name: value. Value may be a constant or an expression, such as:
  • "> 10"
  • "< 1"
  • "is NULL" (event does NOT have the trait)
  • "is not NULL" (event has the trait, regardless of value)
  • a time expression (for date time traits)
  • a time range (also for date time traits)
    The distinguished_by traits for the trigger are automatically added to each criterion's traits. To match a given criterion, all of the traits specified must match.
  • fire_criteria: Describes what events must be present in the trigger's event list for the trigger to fire. Similar to the match criteria, but fire criteria have one additional option:
  • number: There must be at least this many events that match this criterion for the trigger to fire. Defaults to 1.
  • load_criteria: Additional, historical events may be loaded from the datastore prior to the trigger's running of its event pipelines (either firing or expiring). The syntax is the same as for match_criteria. These events will be added to the trigger's event list just before running the pipeline(s).
  • fire_pipeline: Name of an event pipeline to send the events to when the trigger fires. The events in the event list are sorted according to timestamp, and sent to that event pipeline.
  • expire_pipeline: An optional name of a pipeline that events will be sent to if the trigger expires. If such a pipeline is listed, then events are sent to that pipeline when the trigger's expiration time has passed.
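
Putting the fields together, a hypothetical definition for tracking instance creation might look like the sketch below. The event types, trait names, and pipeline names are examples only, not part of the blueprint.

  # A hypothetical trigger definition; event types, trait names, and pipeline
  # names are illustrative only.
  instance_create_trigger = {
      'name': 'instance_create',
      # One trigger instance per unique instance_id seen in matching events.
      'distinguished_by': ['instance_id'],
      # Expire 4 hours after the first matching event is received.
      'expiration': 'first+4h',
      # Wait 30 seconds after the fire criteria are met, to catch stragglers.
      'fire_delay': 30,
      # Collect every compute.instance.create.* event for the instance.
      'match_criteria': [
          {'event_type': 'compute.instance.create.*',
           'traits': {'instance_id': 'is not NULL'}},
      ],
      # Fire once we have both the start and the end event.
      'fire_criteria': [
          {'event_type': 'compute.instance.create.start', 'number': 1},
          {'event_type': 'compute.instance.create.end', 'number': 1},
      ],
      'fire_pipeline': 'instance_create_stats',
      'expire_pipeline': 'instance_create_failures',
  }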

Trigger Flow:

There are four main processes to the trigger system: Processing incoming events, time-based execution for trigger firing and expiration, running the fire pipeline for firing triggers, and running the expire pipeline for expiring triggers.


Incoming Event:

An Event comes into the system, either from a converted Notification, or from other systems (such as Alarms) that may generate Events. It is passed to the Dispatchers, and the DB dispatcher persists the event to the datastore (rejecting duplicates by message_id along the way). It is then passed to the Trigger Manager. The trigger manager checks the event against the list of trigger definitions to see if the event matches any. For each definition the event matches, the trigger manager then determines the set of distinguishing trait values from that event, and checks the datastore to see if there is an existing, non-expired trigger instance in the ACTIVE state that matches. If there is no existing trigger instance, one will be created in the ACTIVE state and persisted to the datastore, with its distinguishing trait values and first event timestamp derived from the Event. In either case, the Event's id is then added to that trigger instance's event list, and the trigger instance's last event timestamp is set from the event's timestamp. The trigger instance's expire_timestamp will then be (re)calculated based on its first_event or last_event timestamp, as appropriate.
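
A rough sketch of that per-event flow, reusing matches_any and eval_time_expression from the sketches above; the datastore methods (find_active_trigger, create_trigger, save) are assumed for illustration, not an existing API.

  # A sketch of the trigger manager's per-event flow; the datastore API used
  # here is an assumption.
  def handle_event(event, definitions, datastore):
      """Match an event against all trigger definitions and update instances."""
      touched = []
      for definition in definitions:
          if not matches_any(event, definition['match_criteria']):
              continue
          # Distinguishing trait values pulled from the event itself.
          key = {t: event['traits'][t] for t in definition['distinguished_by']}
          instance = datastore.find_active_trigger(definition['name'], key)
          if instance is None:
              instance = datastore.create_trigger(
                  name=definition['name'], distinguished_by=key, state='ACTIVE',
                  first_event=event['timestamp'], event_ids=[])
          instance['event_ids'].append(event['message_id'])
          last = instance.get('last_event')
          instance['last_event'] = max(last, event['timestamp']) if last else event['timestamp']
          # Recalculate expiration from first_event/last_event per the definition.
          instance['expire_timestamp'] = eval_time_expression(
              definition['expiration'],
              instance['first_event'], instance['last_event'])
          datastore.save(instance)
          touched.append((definition, instance))
      return touched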


Afterwards, any trigger instances created or changed by the event that do not already have their fire_timestamp set are checked to see if their firing criteria are met. For any that have met their firing criteria, the trigger instance's fire_timestamp is set to the current time, plus any fire delay period. Firing of the triggers is done through a timestamp in order to allow for a firing delay to collect any out-of-order events.
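
Continuing the sketch above, the fire-criteria check and the delayed fire_timestamp might look like this; matches_criterion is from the criteria sketch, and load_events is an assumed datastore call.

  # A sketch of setting fire_timestamp once the fire criteria are satisfied;
  # load_events and the touched list come from the sketches above.
  from datetime import datetime, timedelta

  def check_fire_criteria(touched, datastore):
      for definition, instance in touched:
          if instance.get('fire_timestamp'):
              continue  # already scheduled to fire
          events = datastore.load_events(instance['event_ids'])
          satisfied = all(
              sum(1 for e in events if matches_criterion(e, crit)) >= crit.get('number', 1)
              for crit in definition['fire_criteria'])
          if satisfied:
              # Fire via a timestamp so out-of-order stragglers arriving within
              # fire_delay are still collected before the pipeline runs.
              instance['fire_timestamp'] = (datetime.utcnow() +
                  timedelta(seconds=definition.get('fire_delay', 0)))
              datastore.save(instance)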


Time-Based Trigger Firing/Expiration:

The fire/expire job runs as a background thread in the Collector (or it could be a separate process). It performs two tasks in an indefinite loop.


First it fires triggers whose fire_timestamp has passed. A limited-size batch of triggers whose state is ACTIVE and whose fire_timestamp is less than the current time is loaded from the datastore. The list of triggers is iterated in random order, each one is set to the READY_TO_FIRE state, and an rpc cast is made to the collector's fire_trigger method with the trigger's id. An rpc cast is used to spread the work across multiple workers.


Similarly, it then expires triggers whose expire time has passed. A batch of triggers whose state is ACTIVE, whose expire time has passed, and that have a null fire_timestamp is loaded and iterated in random order. Each is set to the READY_TO_EXPIRE state, and an rpc cast to expire_trigger with the trigger's id is made.


If neither of the job's two tasks has anything to process, it sleeps for a short period (a second or so). The job is designed so that multiple jobs can run simultaneously, for scaling. Because the trigger states use a mechanism to synchronize between workers (see Trigger States below), the jobs will not step on each other. Also, the methods called via rpc are deliberately designed so that calling expire or fire multiple times on the same trigger will not run its fire or expiration more than once.
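
A sketch of that loop, assuming a small datastore query API, an rpc client with cast(), and a change_state() helper like the one sketched under Trigger States below; all names are illustrative.

  # A sketch of the background fire/expire loop; the datastore queries, the
  # rpc client, and change_state() are assumptions for illustration.
  import random
  import time
  from datetime import datetime

  def fire_expire_loop(datastore, rpc, batch_size=100):
      while True:
          now = datetime.utcnow()
          # Fire: ACTIVE triggers whose fire_timestamp has passed.
          to_fire = datastore.find_triggers(state='ACTIVE', fire_before=now,
                                            limit=batch_size)
          # Expire: ACTIVE triggers past expire_timestamp with no fire_timestamp.
          to_expire = datastore.find_triggers(state='ACTIVE', expire_before=now,
                                              fire_timestamp=None, limit=batch_size)
          random.shuffle(to_fire)
          random.shuffle(to_expire)
          for trigger in to_fire:
              if datastore.change_state(trigger['id'], 'ACTIVE', 'READY_TO_FIRE'):
                  rpc.cast('fire_trigger', trigger_id=trigger['id'])
          for trigger in to_expire:
              if datastore.change_state(trigger['id'], 'ACTIVE', 'READY_TO_EXPIRE'):
                  rpc.cast('expire_trigger', trigger_id=trigger['id'])
          if not to_fire and not to_expire:
              time.sleep(1)  # nothing to do; back off briefly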


Firing Trigger:

An rpc request comes in to fire a given trigger instance. The trigger instance is loaded from the datastore, and it is checked to make sure it is in the READY_TO_FIRE state. (If not, the request is ignored.) The trigger instance is set to the FIRING state. All of the events in the trigger's event list are loaded from the datastore. If there are any load criteria, they are used to construct a query to the datastore to load additional events. The combined list of events is sorted by timestamp, earliest to latest, and any duplicates are removed. The firing pipeline is loaded by the Event Pipeline manager, and the combined list of events is sent to that pipeline. Once the events have successfully finished processing through the pipeline, the trigger instance is deleted. If the pipeline throws an error while processing, the error is logged, and the trigger instance is set to an ERROR state. (An error in the pipeline is likely due to a code error, and should be investigated to file a bug.)
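
A sketch of that fire path; the pipeline manager and datastore interfaces are assumed (with definitions keyed by trigger name), and the duplicate-removal key (message_id) follows the event persistence described earlier.

  # A sketch of firing a trigger; datastore and pipeline_manager interfaces
  # are assumptions for illustration.
  import logging

  LOG = logging.getLogger(__name__)

  def load_and_order_events(trigger, definition, datastore):
      """Load the event list plus any load_criteria events, sort, and dedupe."""
      events = datastore.load_events(trigger['event_ids'])
      for criterion in definition.get('load_criteria', []):
          events.extend(datastore.query_events(criterion))
      seen, ordered = set(), []
      for event in sorted(events, key=lambda e: e['timestamp']):
          if event['message_id'] not in seen:
              seen.add(event['message_id'])
              ordered.append(event)
      return ordered

  def fire_trigger(trigger_id, datastore, pipeline_manager, definitions):
      trigger = datastore.get_trigger(trigger_id)
      # Only fire triggers the background job has marked READY_TO_FIRE.
      if trigger is None or not datastore.change_state(trigger_id, 'READY_TO_FIRE', 'FIRING'):
          return
      definition = definitions[trigger['name']]
      events = load_and_order_events(trigger, definition, datastore)
      pipeline = pipeline_manager.get(definition['fire_pipeline'])
      try:
          pipeline.publish_events(events)
      except Exception:
          # Likely a code error in the pipeline; keep the trigger for inspection.
          LOG.exception("error firing trigger %s", trigger_id)
          datastore.set_state(trigger_id, 'ERROR')
          return
      datastore.delete_trigger(trigger_id)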


Expiring Trigger:

An rpc request is received to expire a given trigger instance. The trigger instance is loaded from the datastore, and checked to make sure it is in the READY_TO_EXPIRE state. (If not, the request is ignored.) The trigger is set to the EXPIRING state. If there is an expire pipeline, events are loaded from the event list and any load_criteria. The event loading proceeds the same as when firing, and the events are sent to the expire pipeline. If there is an error, the trigger is set to an EXPIRE_ERROR state. If the trigger is not in an error state, it is then deleted.
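
The expire path mirrors the fire path; a compressed sketch with the same assumed helpers (load_and_order_events and LOG are from the firing sketch above):

  # A sketch of expiring a trigger; mirrors fire_trigger with different states
  # and an optional pipeline. Helper names are assumptions.
  def expire_trigger(trigger_id, datastore, pipeline_manager, definitions):
      trigger = datastore.get_trigger(trigger_id)
      if trigger is None or not datastore.change_state(trigger_id, 'READY_TO_EXPIRE', 'EXPIRING'):
          return
      definition = definitions[trigger['name']]
      pipeline_name = definition.get('expire_pipeline')
      if pipeline_name:
          events = load_and_order_events(trigger, definition, datastore)
          try:
              pipeline_manager.get(pipeline_name).publish_events(events)
          except Exception:
              LOG.exception("error expiring trigger %s", trigger_id)
              datastore.set_state(trigger_id, 'EXPIRE_ERROR')
              return
      datastore.delete_trigger(trigger_id)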


Note: Trigger States:

The trigger instance's state is synchronized to prevent multiple workers stepping on each other. This can be done with a simple optimistic locking protocol in the datastore, or through some form of external atomic state machine if a suitable one is available.
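
As one possible realization of that optimistic protocol, a conditional UPDATE in a SQL datastore works; the sketch below uses SQLAlchemy Core, and the table layout is an assumption.

  # A sketch of an optimistic state transition using SQLAlchemy Core; the
  # table layout is an assumption.
  import sqlalchemy as sa

  metadata = sa.MetaData()
  triggers = sa.Table('triggers', metadata,
                      sa.Column('id', sa.Integer, primary_key=True),
                      sa.Column('state', sa.String(32)))

  def change_state(conn, trigger_id, from_state, to_state):
      """Atomically move a trigger from one state to another.

      The UPDATE only matches rows still in from_state, so when several
      workers race, exactly one wins; the others see rowcount 0 and back off.
      """
      result = conn.execute(
          triggers.update()
          .where(sa.and_(triggers.c.id == trigger_id,
                         triggers.c.state == from_state))
          .values(state=to_state))
      return result.rowcount == 1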