Monasca/Events


Revision as of 23:22, 2 February 2015

Introduction

Events processing in Monasca will be capable of receiving incoming events, transforming them, storing them, and processing them. Processing consists of defining filters on events and grouping matched events together based on fields in the event. Fire and expire conditions can be defined that result in notifications being invoked, similar to how actions are defined for alarm definitions.

An example use case is to send OpenStack "instance" events to the API. A transform would be defined on the events that reduces the supplied fields to a smaller, more meaningful set and normalizes the data.

Events API

Events

  • POST /v2.0/events: Publish an event.
  • GET /v2.0/events/{event_id}: Get the event with the specified event ID.
  • GET /v2.0/events: List events.
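As a sketch, an event could be published to POST /v2.0/events as a JSON body. The field names, endpoint host, and auth header below are assumptions for illustration, not part of a published Monasca spec:

```python
import json

# Hypothetical event body modeled on an OpenStack notification;
# the exact schema accepted by POST /v2.0/events is an assumption.
event = {
    "event_type": "compute.instance.create.start",
    "timestamp": "2015-02-02T23:22:00Z",
    "instance_id": "9f2a7c1e-example-uuid",
}
body = json.dumps(event)

# The event would then be published with something like:
#   requests.post("http://monasca-api:8080/v2.0/events",
#                 headers={"X-Auth-Token": token,
#                          "Content-Type": "application/json"},
#                 data=body)
```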

Transforms

  • POST /v2.0/transforms: Create a transform.
  • GET /v2.0/transforms: List transforms.
  • GET /v2.0/transforms/{transform_id}: Get the transform with the specified ID.
  • DELETE /v2.0/transforms/{transform_id}: Delete the transform with the specified ID.

Stream Definition

  • POST /v2.0/stream-definitions: Creates a stream definition with the following parameters in the JSON body:
    • name (string(255), required) - A unique name for the stream definition.
    • description (string(255), optional) - A description of the stream definition.
    • match_by - Fields of events to filter/match on, for example all "compute.instance.create.*" events.
    • group_by - Fields of events to group on. All events that match the match_by criteria are grouped by the specified fields, for example by "instance_id" or "user_id".
    • expires - Time in milliseconds after which a stream expires and the expire actions are invoked.
    • fire_actions ([string(50)], optional) - Array of notification method IDs that are invoked when the stream fires.
    • expire_actions ([string(50)], optional) - Array of notification method IDs that are invoked when the stream expires.
  • GET /v2.0/stream-definitions: List stream definitions.
  • GET /v2.0/stream-definitions/{stream-definition-id}: Get the stream definition with the specified ID.
  • DELETE /v2.0/stream-definitions/{stream-definition-id}: Delete the stream definition with the specified ID.
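A request body for POST /v2.0/stream-definitions built from the parameters above might look like the following sketch; the concrete values (event field names, notification method ID, expiry) are illustrative assumptions:

```python
import json

# Hypothetical stream definition covering the fields listed above.
stream_definition = {
    "name": "instance-create-stream",
    "description": "Group compute.instance.create.* events per instance",
    "match_by": ["event_type"],       # e.g. match "compute.instance.create.*"
    "group_by": ["instance_id"],      # group matched events by instance
    "expires": 3600000,               # milliseconds until expire actions fire
    "fire_actions": ["NOTIFICATION_METHOD_ID"],
    "expire_actions": ["NOTIFICATION_METHOD_ID"],
}
body = json.dumps(stream_definition)
```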

Transformation Engine

Consumes events from Kafka, transforms them, and publishes to Kafka.
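The transform step described above can be sketched as a pure function wrapped in a Kafka consume/produce loop; the field names retained and the topic names are assumptions, not the Monasca schema:

```python
import json

def transform(raw_event):
    """Reduce a raw event to a smaller, normalized set of fields.
    The retained field names here are illustrative only."""
    return {
        "event_type": raw_event.get("event_type"),
        "timestamp": raw_event.get("timestamp"),
        "payload": {k: raw_event[k] for k in ("instance_id", "user_id")
                    if k in raw_event},
    }

# The engine would wrap this in a Kafka loop, e.g. with kafka-python
# (topic names are assumptions):
#   consumer = KafkaConsumer("raw-events")
#   producer = KafkaProducer()
#   for msg in consumer:
#       transformed = transform(json.loads(msg.value))
#       producer.send("transformed-events", json.dumps(transformed).encode())
```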

Event Engine

Consumes transformed events from Kafka, and uses the Winchester pipeline to process them.

Distiller

No changes required.

Winchester

  • Add support for multi-tenancy.
  • Dynamically update pipelines.
  • Add and delete pipeline definitions at run-time. Currently, Winchester pipelines need to be created at start-up time.
    • Supply pipeline definitions via methods, not YAML files. Winchester currently reads the pipeline configuration from YAML files at start-up time.
  • Create a pipeline handler that publishes notification events so that the Notification Engine can consume them.

Notification Engine

Needs to be able to consume general events from the Threshold Engine or Winchester Pipeline Handler.

Threshold Engine

Update to generate more general alarm state transition events.

MySQL

  • Initialize Winchester schemas
  • Initialize Monasca transforms and pipeline schemas.

Demo

Look into creating a demo of a pipeline handler using Iron.IO.