Monasca/Monitoring Of Monasca

Goals and Deliverables

 * Default alarm severity and descriptions
 * Out-of-the-box, general-purpose monitoring metrics and alarms for all parts (services, applications, OS) that make up a Monasca installation.
 * A dashboard for monitoring the health of the Monasca-specific components.
 * Each component should have metrics that give a view of the service useful for thresholds, debugging, and capacity planning.
 * CLI tools, complementing the UI, capable of displaying Monasca details:
 * monasca-collector info
 * monasca-forwarder info


 * Metrics
 * Pre-configured alarm definitions for all core services with reasonable general-purpose thresholds


 * Easy to see if the service is up or down
 * Status, capacity, throughput, and latency with reasonable defaults out of the box
 * A standard naming convention for metrics, with some names reserved for monasca-agent

''There are exceptions when there are shared components, such as MySQL, where other OpenStack components might influence performance or availability. The shared database would be labeled generically rather than identified specifically as a Monasca component.''
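To illustrate the naming convention above, here is a minimal sketch of the general shape of a metric envelope (name, numeric value, timestamp, flat dimensions dict). The reserved dimension names listed here are assumptions for illustration; the actual reserved names are defined by monasca-agent, not by this sketch.

```python
from time import time

# Assumed reserved dimension keys for this sketch; monasca-agent defines
# the real reserved names and fills them in itself.
RESERVED_DIMENSIONS = {"hostname", "service", "component"}

def build_metric(name, value, dimensions=None):
    """Build a metric envelope in the general shape an agent emits:
    a name, a numeric value, a timestamp, and a flat dict of dimensions."""
    dimensions = dict(dimensions or {})
    for key in dimensions:
        if key in RESERVED_DIMENSIONS:
            raise ValueError(f"dimension {key!r} is reserved for the agent")
    return {
        "name": name,
        "value": float(value),
        "timestamp": time(),
        "dimensions": dimensions,
    }

metric = build_metric("monasca.collection_time_sec", 1.2, {"plugin": "mysql"})
```

The flat dimensions dict is what makes the "adding dimensions as a list" future consideration below a schema change rather than a simple extension.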

User Stories

 * As an end user the first thing I want to see after installing Monasca is a dashboard showing the status, capacity, and latency of my Monasca installation.
 * As an end user deploying Monasca either individually, via CI, Vagrant, or using the installer, I want an initial dashboard showing the status of Monasca.
 * As an operator I want a simple and concise view of the health of the Monasca service.
 * As an operator or provider I want metrics for all Monasca components that will describe the status, capacity, and latency of each component.

Commits

 * Dashboard
 * Grafana board
 * Moved to setting up alarms with a role so they can be used more widely
 * Added the default alarms role
 * New monasca-vagrant role for global alarms
 * Apache Storm and Threshold Engine StatsD monitoring
 * Ansible config file update pull request for Storm/Thresh Engine
 * Grafana board update for Storm and Threshold Engine
 * Vertica plugin
 * InfluxDB plugin
 * Dropwizard plugin (API, Persister, Thresh)
 * Grafana board (Vertica, Persister, InfluxDB, API)

Future Feature Considerations

 * Support for adding dimensions as a list

Metrics libraries currently used and available

 * Java: statsd, dropwizard
 * Python: yammer metrics library

Measurements

 * Messages per second, with an alarm triggered if the rate falls below a threshold.
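As a sketch of how such an alarm could be expressed: Monasca alarm-definition expressions take the general form ''function(metric{dimensions}) operator threshold [times N]''. The metric name and threshold below are illustrative placeholders, not actual Monasca metrics, and the field set is a typical alarm-definition request body rather than an authoritative one.

```json
{
  "name": "Message rate too low",
  "description": "Messages per second dropped below the expected minimum",
  "expression": "avg(monasca.messages_per_second) < 100 times 3",
  "severity": "HIGH",
  "match_by": ["hostname"]
}
```

The ''times 3'' clause requires three consecutive sub-threshold periods before alarming, which avoids flapping on momentary dips in throughput.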

Architectural Components
Off the shelf open components
 * Apache Kafka (message queue)
 * MySQL (alarm, notifications database)
 * InfluxDB (metrics, logging, events database)
 * Apache Storm (realtime stream processor)
 * Apache Zookeeper (resource coordinator)
 * Operating System

Monasca components
 * API
 * Agent
 * Notification engine
 * Threshold engine
 * Persister

Component Status
Agent
 * Collection time (existing)
 * Emit time (existing)
 * Message error rate needs to be added: add both an error count and an error rate, and alarm on the rate (only one of the two needs an alarm).
 * Possible future performance metric: number_of_messages_sent.
 * Move the metric from the collector to the forwarder, where it would be a much more useful measurement.
 * Keystone auth errors need to be added; this tells us if there is an authentication problem.
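The "count plus rate, alarm on the rate" idea above can be sketched as follows. This is an illustrative helper, not monasca-agent code: the class name and window mechanics are assumptions.

```python
import time

class ErrorRateTracker:
    """Track a cumulative error count and derive an error rate
    (errors per second) over a sliding interval. Expose both the
    count and the rate as metrics, but alarm only on the rate."""

    def __init__(self):
        self.count = 0
        self._samples = []  # (timestamp, cumulative count) pairs

    def record_error(self):
        self.count += 1

    def rate(self, interval=60.0, now=None):
        """Errors per second over at most the last `interval` seconds."""
        now = time.time() if now is None else now
        self._samples.append((now, self.count))
        # Drop samples older than the interval.
        self._samples = [(t, c) for t, c in self._samples
                         if now - t <= interval]
        oldest_t, oldest_c = self._samples[0]
        elapsed = now - oldest_t
        return 0.0 if elapsed == 0 else (self.count - oldest_c) / elapsed
```

Alarming on the rate rather than the count means a long-running forwarder with a large historical error total does not stay in alarm once errors stop occurring.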

API
 * Goal: the Java and Python implementations should expose the same metrics so alarms can be shared.
 * The current Python API does not have metrics.
 * The current metric is status (UP/DOWN).
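A status (UP/DOWN) metric typically comes from probing a health endpoint. The sketch below is illustrative: the URL is a placeholder, and the value convention (0 = UP, 1 = DOWN) is an assumption, so use whichever convention your alarms expect, as long as it is consistent.

```python
import urllib.request
import urllib.error

def status_metric(healthcheck_url, timeout=2.0):
    """Probe a health endpoint and return a status metric value:
    0 for UP, 1 for DOWN (value convention is an assumption here).
    HTTP error responses raise HTTPError, a URLError subclass, so
    4xx/5xx responses also map to DOWN."""
    try:
        with urllib.request.urlopen(healthcheck_url, timeout=timeout) as resp:
            return 0 if 200 <= resp.status < 300 else 1
    except (urllib.error.URLError, OSError):
        return 1
```

Reporting DOWN on any exception, including timeouts and connection refusals, is what makes this metric useful for the "easy to see if the service is up or down" goal: an unreachable API looks the same as a crashed one.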

Notification engine
 * Currently using statsd

Threshold engine
 * ack-count.[COMPONENT]-bolt_default
 * ack-count.metrics-spout_default
 * emit-count.alarm-creation-stream
 * execute-count.[COMPONENT]-bolt_default
 * execute-count.event-spout_default
 * execute-count.filtering-bolt_alarm-creation-stream
 * execute-count.filtering-bolt_default
 * execute-count.metrics-spout_default
 * execute-latency.[COMPONENT]-bolt_default
 * execute-latency.event-spout_default
 * execute-latency.filtering-bolt_alarm-creation-stream
 * execute-latency.filtering-bolt_default
 * execute-latency.metrics-spout_default
 * process-latency.[COMPONENT]-bolt_default
 * process-latency.metrics-spout_default
 * transfer-count.alarm-creation-stream

Persister
 * Goal: the Java and Python implementations should expose the same metrics so alarms can be shared.
 * The current Python persister does not have metrics.
 * The current metric is status (UP/DOWN).

Operating System
 * Currently has a plugin

MySQL (alarm, notifications database)
 * Lots of existing metrics, so none are needed
 * Currently has a plugin

Apache Kafka (message queue)
 * Lots of existing metrics, so none are needed
 * Currently has a plugin

Apache Zookeeper (resource coordinator)
 * Lots of existing metrics, so none are needed
 * Currently has a plugin

InfluxDB (metrics, logging, events database)
 * No metrics are currently exposed
 * TBD in the future

Apache Storm (realtime stream processor)
 * Also has UI and metrics enabled
 * GC_ConcurrentMarkSweep.count
 * GC_ConcurrentMarkSweep.timeMs
 * GC_ParNew.count
 * GC_ParNew.timeMs
 * ack-count.system_tick
 * emit-count.default
 * emit-count.metrics
 * emit-count.system
 * execute-count.system_tick
 * execute-latency.system_tick
 * memory_heap.committedBytes
 * memory_heap.initBytes
 * memory_heap.maxBytes
 * memory_heap.unusedBytes
 * memory_heap.usedBytes
 * memory_heap.virtualFreeBytes
 * memory_nonHeap.committedBytes
 * memory_nonHeap.initBytes
 * memory_nonHeap.maxBytes
 * memory_nonHeap.unusedBytes
 * memory_nonHeap.usedBytes
 * memory_nonHeap.virtualFreeBytes
 * newWorkerEvent
 * process-latency.system_tick
 * receive.capacity
 * receive.population
 * receive.read_pos
 * receive.write_pos
 * sendqueue.capacity
 * sendqueue.population
 * sendqueue.read_pos
 * sendqueue.write_pos
 * startTimeSecs
 * transfer-count.default
 * transfer-count.metrics
 * transfer-count.system
 * transfer.capacity
 * transfer.population
 * transfer.read_pos
 * transfer.write_pos
 * uptimeSecs