Monasca/Events

News

July 2018 - Work is continuing on Monasca Events with a revision to the architecture and API. The information below dates from 2015 to January 2017; look for updates as work progresses.



Introduction

Real-time event stream processing in Monasca is a work in progress that will allow events from external data sources to be sent to the Monasca API, where they can be transformed, stored, queried, filtered, grouped, and associated with handlers. Streams are defined by filtering and grouping events on fields in the event. Handlers can be associated with stream definitions and fire when the stream trigger occurs.

Examples of OpenStack events are instance creation and instance deletion. Specific actions can follow these events; for example, if an instance is deleted, then all the alarms related to that instance should be deleted too.
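
As an illustration, a handler reacting to an instance-delete event could remove those alarms through the Monasca alarms API. This is a sketch only: the endpoint, the token, and the use of a resource_id dimension to tie alarms to the instance are assumptions:

    import requests

    MONASCA_API = 'http://monasca-api:8070'        # assumed endpoint
    HEADERS = {'X-Auth-Token': 'KEYSTONE_TOKEN'}   # assumed auth token

    def on_instance_deleted(instance_id):
        """Delete all alarms on metrics dimensioned with the instance's id."""
        resp = requests.get(MONASCA_API + '/v2.0/alarms',
                            params={'metric_dimensions': 'resource_id:' + instance_id},
                            headers=HEADERS)
        resp.raise_for_status()
        for alarm in resp.json()['elements']:
            requests.delete(MONASCA_API + '/v2.0/alarms/' + alarm['id'],
                            headers=HEADERS).raise_for_status()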

An example use case is to send OpenStack "compute.instance.create.*" events (see https://wiki.openstack.org/wiki/NotificationEventExamples) to the API. A transform could be defined that reduces the many fields supplied in the source event to a more manageable number and normalizes the data. An event stream can be created by defining a filter that selects all "compute.instance.create.*" events and groups them by a set of fields in the event, such as "instance_id". When the "compute.instance.create.end" event occurs, a fire criteria can be invoked that processes the stream.
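
As a concrete illustration of processing such a stream, the handler below computes the instance-create duration once the stream fires. It is a hypothetical sketch: the handler interface and the event field names are assumptions:

    from datetime import datetime

    ISO = '%Y-%m-%dT%H:%M:%S.%f'   # assumed timestamp format

    def on_fire(stream_events):
        """Hypothetical fire handler for a stream grouped by instance_id."""
        by_type = {e['event_type']: e for e in stream_events}
        start = datetime.strptime(
            by_type['compute.instance.create.start']['timestamp'], ISO)
        end = datetime.strptime(
            by_type['compute.instance.create.end']['timestamp'], ISO)
        # The create duration, from which a new event or metric could be made.
        return (end - start).total_seconds()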

Events API

Events

  • POST /v2.0/events: Publish an event.
  • GET /v2.0/events/{event_id}: Get an event with the specific event ID.
  • GET /v2.0/events: List events.
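
A minimal sketch of publishing an event through the POST call above, assuming token authentication and an illustrative JSON payload (the endpoint host, port, and payload fields are assumptions):

    import requests

    event = {
        'event_type': 'compute.instance.create.start',  # illustrative payload
        'timestamp': '2015-06-17T21:57:03.493121',
        'instance_id': '4f944a46-7e33-4a83-a6b5-5ec2c3e1d5be',
    }
    resp = requests.post('http://monasca-api:8070/v2.0/events',
                         json=event,
                         headers={'X-Auth-Token': 'KEYSTONE_TOKEN'})
    resp.raise_for_status()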

Transforms

  • POST /v2.0/events/transforms - Create a transform
  • GET /v2.0/events/transforms - List transforms
  • GET /v2.0/events/transforms/{transform_id} - Get the specified transform
  • DELETE /v2.0/events/transforms/{transform_id} - Delete the specified transform
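
The transform body is not spelled out on this page, so the sketch below is only a guess at its shape; it borrows the distiller-style event-definition format (an event_type match plus the traits to keep) to illustrate field reduction and normalization:

    import requests

    transform = {                       # hypothetical body format
        'name': 'instance_create_reduce',
        'specification': {
            'event_type': 'compute.instance.create.*',
            'traits': {                 # keep and normalize a few fields
                'instance_id': {'fields': 'payload.instance_id'},
                'tenant_id': {'fields': 'payload.tenant_id'},
            },
        },
    }
    requests.post('http://monasca-api:8070/v2.0/events/transforms',
                  json=transform,
                  headers={'X-Auth-Token': 'KEYSTONE_TOKEN'}).raise_for_status()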

Stream Definition

  • POST /v2.0/events/stream-definitions: Creates a stream definition with the following parameters in the JSON body
    • name (string(255), required) - A unique name for the stream.
    • description (string(255), optional) - A description of the stream.
    • select () - Fields of events to filter/match on. For example, select all events where the field "event_type" matches "compute.instance.create.*".
    • group_by () - Fields of events to group on. For all events that match the select criteria, group them by the specified fields. For example, group events by the field "instance_id" or "user_id".
    • expires (int, required) - Elapsed time in milliseconds from the start of a stream to when the expire actions are invoked if the fire actions haven't occurred yet.
    • fire_actions ([string(50)], optional) - Array of notification method IDs that are invoked when the pipeline fires.
    • expire_actions ([string(50)], optional) - Array of notification method IDs that are invoked when the pipeline expires.
  • GET /v2.0/events/stream-definitions: List stream definitions.
  • GET /v2.0/events/stream-definitions/{stream-definition-id}: Get the specified stream definition.
  • DELETE /v2.0/events/stream-definitions/{stream-definition-id}: Delete the specified stream definition.
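
Putting the parameters above together for the instance-create use case from the introduction; the exact shapes of select and group_by are not given above, so those two values are assumptions:

    import requests

    stream_definition = {
        'name': 'instance_create_stream',
        'description': 'Group instance create events per instance',
        'select': [{'event_type': 'compute.instance.create.*'}],  # assumed shape
        'group_by': ['instance_id'],                              # assumed shape
        'expires': 3600000,          # expire after one hour (milliseconds)
        'fire_actions': ['NOTIFICATION_METHOD_ID'],
        'expire_actions': ['NOTIFICATION_METHOD_ID'],
    }
    requests.post('http://monasca-api:8070/v2.0/events/stream-definitions',
                  json=stream_definition,
                  headers={'X-Auth-Token': 'KEYSTONE_TOKEN'}).raise_for_status()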

Transformation Engine

Consumes events from Kafka, transforms them, and publishes to Kafka.
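
A minimal sketch of that loop using the kafka-python client; the topic names and the set of kept fields are assumptions:

    import json
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer('raw-events',               # assumed topic name
                             bootstrap_servers='localhost:9092',
                             value_deserializer=lambda m: json.loads(m.decode('utf-8')))
    producer = KafkaProducer(bootstrap_servers='localhost:9092',
                             value_serializer=lambda m: json.dumps(m).encode('utf-8'))

    KEEP = ('event_type', 'timestamp', 'instance_id', 'tenant_id', 'state')

    for message in consumer:
        event = message.value
        # Reduce the event to a manageable set of fields; a real transform
        # would also normalize the data.
        transformed = {k: event[k] for k in KEEP if k in event}
        producer.send('transformed-events', transformed)  # assumed topic name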

Event Engine

Consumes transformed events from Kafka, and uses the Winchester pipeline to process them.

Distiller

No changes required.

Winchester

  • Add support for multi-tenancy.
  • Dynamically update pipelines.
  • Add and delete pipeline definitions at run-time. Currently, the Winchester pipelines need to be created at start-up time.
    • Supply pipeline definitions in methods, not yaml files. Winchester currently reads the pipeline configuration information from yaml files at start-up time (see the sketch after this list).
  • Create pipeline handler that publishes notification events such that the Notification Engine can consume them.
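
A hypothetical sketch of the last two items: the same definition Winchester reads from pipelines.yaml at start-up, supplied instead as an in-memory structure to a running engine. The add_pipeline call does not exist today and is purely illustrative:

    # Shape mirrors a pipelines.yaml entry: pipeline name -> handler list.
    pipeline_definition = {
        'instance_create_pipeline': [
            {'name': 'usage', 'params': {}},
            'notifier',
        ],
    }

    # Instead of editing yaml and restarting, an API call would hand the
    # definition to the running engine:
    # pipeline_manager.add_pipeline(pipeline_definition)  # hypothetical method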

Threshold Engine

Update to generate more general alarm state transition events.
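
For illustration, a more general alarm state transition event might look like the sketch below; the field names are assumptions modeled on the alarm-transitioned messages the Threshold Engine already emits:

    alarm_state_transition_event = {
        'event_type': 'alarm.state.transition',  # hypothetical general form
        'tenant_id': 'abc123',
        'alarm_id': 'def456',
        'old_state': 'OK',
        'new_state': 'ALARM',
        'state_change_reason': 'Threshold exceeded',
        'timestamp': '2015-06-17T21:57:03Z',
    }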

MySQL

  • Initialize Winchester schemas
  • Initialize Monasca transforms and pipeline schemas.