
Monasca/Events


News

July 2018 - Work is continuing on Monasca Events with a revision to the architecture and API. The information below dates from before January 2017; look for updates as work progresses.

Introduction

Real-time event stream processing in Monasca is a work in progress that will allow events from external data sources to be sent to the Monasca API, where they can be transformed, stored, queried, filtered, grouped, and associated with handlers. Streams can be defined by filtering and grouping events on fields in the event. Handlers can be associated with stream definitions and fire when the stream trigger occurs.

Examples of OpenStack events are instance create and instance delete. Specific actions can follow these events; for example, if an instance is deleted, then all alarms related to that instance should be deleted too.

An example use case is to send OpenStack "compute.instance.create.*" events (see https://wiki.openstack.org/wiki/NotificationEventExamples) to the API. A transform could be defined that reduces the source event to a more manageable number of fields and normalizes the data. An event stream can then be created by defining a filter that selects all "compute.instance.create.*" events and groups them by a set of fields in the event, such as "instance_id". When the "compute.instance.create.end" event occurs, the fire criteria can be triggered to process the stream.
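
For illustration, a transform applied to a raw "compute.instance.create.end" notification might reduce it to a handful of normalized fields like the following. The exact field set is an assumption based on the linked notification examples, not a defined schema:

  transformed_event = {
      "event_type": "compute.instance.create.end",
      "instance_id": "INSTANCE_ID",   # placeholder values
      "tenant_id": "TENANT_ID",
      "user_id": "USER_ID",
      "state": "active",
      "timestamp": "2015-01-01T00:00:00Z",
  }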

Events API

Events

  • POST /v2.0/events: Publish an event.
  • GET /v2.0/events/{event_id}: Get an event with the specific event ID.
  • GET /v2.0/events: List events.
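
A minimal sketch of exercising these endpoints with the Python requests library. The service endpoint, token, IDs, and event body shape are placeholders; this page does not define the event payload schema:

  import requests

  MONASCA_API = "http://monasca-api.example.com:8070"  # placeholder endpoint
  HEADERS = {"X-Auth-Token": "KEYSTONE_TOKEN",  # placeholder Keystone token
             "Content-Type": "application/json"}

  # Publish an event; the body shape is illustrative only.
  event = {"event_type": "compute.instance.create.start",
           "payload": {"instance_id": "INSTANCE_ID"}}
  requests.post(MONASCA_API + "/v2.0/events", json=event, headers=HEADERS)

  # List events, then fetch one by ID.
  events = requests.get(MONASCA_API + "/v2.0/events", headers=HEADERS).json()
  one = requests.get(MONASCA_API + "/v2.0/events/EVENT_ID", headers=HEADERS).json()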

Transforms

  • POST /v2.0/events/transforms: Create a transform.
  • GET /v2.0/events/transforms: List transforms.
  • GET /v2.0/events/transforms/{transform_id}: Get the specified transform.
  • DELETE /v2.0/events/transforms/{transform_id}: Delete the specified transform.
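
Continuing the requests sketch above (same MONASCA_API and HEADERS placeholders), creating and listing transforms might look as follows. The body fields other than "name" and "description" are hypothetical, since this page does not define the transform specification format:

  # Create a transform; "specification" is a hypothetical field name.
  transform = {"name": "instance-create-reducer",
               "description": "Reduce compute.instance.create.* events",
               "specification": "..."}
  requests.post(MONASCA_API + "/v2.0/events/transforms",
                json=transform, headers=HEADERS)

  # List transforms.
  transforms = requests.get(MONASCA_API + "/v2.0/events/transforms",
                            headers=HEADERS).json()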

Stream Definition

  • POST /v2.0/events/stream-definitions: Create a stream definition with the following parameters in the JSON body (see the sketch after this list)
    • name (string(255), required) - A unique name for the stream.
    • description (string(255), optional) - A description of the stream.
    • select () - Fields of events to filter/match on. For example, select all events where the field "event_type" matches "compute.instance.create.*".
    • group_by () - Fields of events to group on. For all events that match the select criteria group them by the specified criteria. For example, group events by the field "instance_id" or "user_id".
    • expires (int, required) - Elapsed time in milliseconds from the start of a stream to when the expire actions are invoked if the fire actions haven't occurred yet.
    • fire_actions ([string(50)], optional) - Array of notification method IDs that are invoked when the pipeline fires.
    • expire_actions ([string(50)], optional) - Array of notification method IDs that are invoked when the pipeline expires.
  • GET /v2.0/events/stream-definitions: List stream definitions.
  • GET /v2.0/events/stream-definitions/{stream-definition-id}: Get the specified stream definition.
  • DELETE /v2.0/events/stream-definitions/{stream-definition-id}: Delete the specified stream definition.
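
Continuing the same sketch, the stream definition for the instance-create use case from the introduction could be created like this. The shapes of the select and group_by values are assumptions, since their types are left unspecified above:

  stream_definition = {
      "name": "instance-create-stream",
      "description": "Group compute.instance.create.* events by instance_id",
      "select": [{"event_type": "compute.instance.create.*"}],  # assumed shape
      "group_by": ["instance_id"],                              # assumed shape
      "expires": 3600000,  # one hour, in milliseconds
      "fire_actions": ["NOTIFICATION_METHOD_ID"],   # placeholder method IDs
      "expire_actions": ["NOTIFICATION_METHOD_ID"],
  }
  requests.post(MONASCA_API + "/v2.0/events/stream-definitions",
                json=stream_definition, headers=HEADERS)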

Transformation Engine

Consumes raw events from Kafka, transforms them, and publishes the transformed events back to Kafka.
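
A minimal sketch of the consume-transform-publish loop using the kafka-python client. The topic names and the transform logic are assumptions; the real engine would apply the transforms registered through the API:

  import json
  from kafka import KafkaConsumer, KafkaProducer

  consumer = KafkaConsumer(
      "raw-events",  # assumed topic name
      bootstrap_servers="localhost:9092",
      value_deserializer=lambda m: json.loads(m.decode("utf-8")))
  producer = KafkaProducer(
      bootstrap_servers="localhost:9092",
      value_serializer=lambda m: json.dumps(m).encode("utf-8"))

  def transform(event):
      # Stand-in for the API-defined transforms: keep a few normalized fields.
      return {"event_type": event.get("event_type"),
              "instance_id": event.get("payload", {}).get("instance_id")}

  for message in consumer:
      producer.send("transformed-events", transform(message.value))  # assumed topic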

Event Engine

Consumes transformed events from Kafka and uses the Winchester pipeline to process them.
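
Sketched as a loop, with the Winchester hand-off stubbed out, since the Winchester entry point is not described on this page:

  import json
  from kafka import KafkaConsumer

  def process_with_winchester(event):
      # Stand-in for handing the event to the Winchester pipeline; the actual
      # Winchester API is not described on this page.
      pass

  consumer = KafkaConsumer(
      "transformed-events",  # assumed topic name
      bootstrap_servers="localhost:9092",
      value_deserializer=lambda m: json.loads(m.decode("utf-8")))
  for message in consumer:
      process_with_winchester(message.value)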

Distiller

No changes required.

Winchester

  • Add support for multi-tenancy.
  • Dynamically update pipelines.
  • Add and delete pipeline definitions at run-time. Currently, Winchester pipelines need to be created at start-up time.
    • Supply pipeline definitions in methods, not yaml files. Winchester currently reads the pipeline configuration information from yaml files at start-up time.
  • Create pipeline handler that publishes notification events such that the Notification Engine can consume them.
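
A sketch of what run-time pipeline management could look like once definitions are supplied through methods rather than yaml files. The class and method names are hypothetical, not the existing Winchester API:

  class DynamicPipelineManager(object):
      """Hypothetical manager that accepts pipeline definitions at run-time."""

      def __init__(self):
          self.pipelines = {}

      def add_pipeline(self, name, handlers):
          # Register or replace a pipeline definition without a restart.
          self.pipelines[name] = list(handlers)

      def delete_pipeline(self, name):
          self.pipelines.pop(name, None)

  manager = DynamicPipelineManager()
  manager.add_pipeline("instance_create_pipeline",
                       ["normalize_fields", "publish_notification"])
  manager.delete_pipeline("instance_create_pipeline")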

Threshold Engine

Update to generate more general alarm state transition events.

MySQL

  • Initialize Winchester schemas
  • Initialize Monasca transforms and pipeline schemas.