
= Distributed Monitoring =

Overview
Monitoring and its applications are becoming a key factor in service lifecycle management of systems such as NFV and cloud-native platforms. The distributed monitoring approach is a framework that enables flexible and scalable monitoring and can work alongside the current OpenStack telemetry and monitoring framework. In this documentation, you can find the architecture and how to implement this framework in your environment.

Who will be interested?

 * Infrastructure operators who want to collect detailed data at short intervals from their compute nodes.
 * NFV operators who want to detect abnormal behavior of Virtual Network Functions (VNFs).

Architecture
This architecture includes several functions for monitoring: collector, in-memory database, analytics engine, and notification. The picture below shows the architecture of distributed monitoring. Each compute node has its own monitoring functions following this architecture.

TODO: insert images

Function

 * The Poller/Notification process collects data from guest OSes and the host OS using the SNMP protocol, the libvirt API, OpenStack APIs, etc.
 * The Collector formats the data for the in-memory database and inserts it into the database.
 * The Analytics Engine analyzes the data in the in-memory database; you can use various analytics engines such as machine learning libraries. You can also directly use an evaluator whose function is threshold monitoring.
 * The Transmitter sends analytics results, and alarms raised by threshold monitoring, to the Operation Support System, OpenStack APIs, and the Orchestrator.
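To make the data flow concrete, the four functions above can be sketched as a single in-process pipeline. This is a minimal illustrative sketch only: the function names, the stubbed sample, and the threshold value are hypothetical, and a real deployment uses collectd, Redis, and scikit-learn as described later in this document.

```python
import statistics
import time

def poll():
    """Poller: gather one metric sample (stubbed with a constant here)."""
    return {"metric": "memory-used", "time": time.time(), "value": 512.0}

def collect(db, sample):
    """Collector: format the sample and append it to the in-memory store."""
    db.setdefault(sample["metric"], []).append((sample["time"], sample["value"]))

def analyze(db, metric):
    """Analytics Engine: evaluate the stored values (here, a simple mean)."""
    values = [v for _, v in db.get(metric, [])]
    return statistics.mean(values) if values else None

def transmit(result, threshold=1024.0):
    """Transmitter: raise an alarm when the result crosses the threshold."""
    return "alarm" if result is not None and result > threshold else "ok"

db = {}
collect(db, poll())
print(transmit(analyze(db, "memory-used")))  # -> ok
```

Because all four stages run inside one compute node, no message queue or controller round-trip is involved.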

Feature

 * Short interval
 * You can collect data at short intervals, such as 0.1 sec, because each collection agent does not have to gather data from a huge number of compute nodes and VMs in this architecture. The load on the agent is therefore lower than in a centralized monitoring architecture.
 * Scalable
 * This architecture is highly scalable. Because each compute node has its own monitoring functions, you do not have to calculate the specs of dedicated monitoring nodes as the cluster grows.
 * Fast detection
 * In some cases, a message queue (MQ) is a performance bottleneck. This architecture uses no MQ, because the monitoring process is closed within each compute node.

Use Cases

 * Memory Leak
 * The first use case is memory leak detection, not only for the compute node but also for virtual machines running on it. In an out-of-memory (OOM) situation, the affected node can suddenly become uncontrollable, so the cloud administrator needs to identify such a condition before the node reaches that state. Distributed monitoring can retrieve memory usage at short intervals and identify a memory leak using machine learning with scikit-learn.
 * Micro Burst Traffic
 * A virtual machine's network statistics are very important for network operation, especially in Network Function Virtualization (NFV) use cases. Operators constantly watch virtual network functions (VNFs) to keep the network healthy. The network can unexpectedly run into trouble due to a 'micro burst', i.e. massive traffic in a very short duration, which is hard to identify because the burst is much shorter than the usual monitoring interval. Distributed monitoring makes it possible to monitor network stats at very short intervals (e.g. 0.1 sec) and identify the affected node.
 * Abnormal behaviour of software/hardware
 * Distributed monitoring makes it possible to monitor various parameters with low latency and without communicating with the controller node, so it can be used to identify abnormal states of virtual machines and the hypervisor.
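As a minimal sketch of the memory leak use case above, the trend of recent memory-usage samples can be checked with a linear regression: a consistently positive slope suggests a leak. The sample data and the slope threshold below are illustrative assumptions, not values from a real node, and a production check would tune both the window size and the threshold.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def leak_suspected(samples, slope_threshold=0.5):
    """samples: memory-used values taken at a fixed short interval.

    Fits a line over the sample index and flags a leak when the fitted
    slope (memory gained per interval) exceeds the threshold.
    """
    t = np.arange(len(samples)).reshape(-1, 1)  # sample index as time axis
    y = np.asarray(samples, dtype=float)
    model = LinearRegression().fit(t, y)
    return model.coef_[0] > slope_threshold

leaking = [100, 102, 104, 106, 108, 110]   # steady growth: slope = 2.0
healthy = [100, 101, 100, 99, 100, 101]    # flat with noise
print(leak_suspected(leaking), leak_suspected(healthy))  # -> True False
```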

Implementation
Some external tools are useful for distributed monitoring. The following shows an example of how to implement distributed monitoring with open source software.

collectd
collectd is a daemon which periodically collects system and application performance metrics and provides mechanisms to store the values in a variety of ways. You can use collectd as the Poller/Notification to gather metrics and as the Collector to store the metrics in Redis. You can implement the Analytics Engine and Transmitter as collectd plugins.

collectd's official page

Redis
Redis is an in-memory data structure store, used as a database, cache and message broker.

Redis's official page

scikit-learn
scikit-learn provides simple and efficient tools for data mining and data analysis. You can use scikit-learn as a lightweight machine learning library for the Analytics Engine.

scikit-learn's official page

OpenStack and Linux packages

 * 1) Set up an OpenStack environment using DevStack or manual installation. See the OpenStack Installation Guide.
 * 2) Install collectd, Redis, and scikit-learn. See their official pages.

collectd
You can implement distributed monitoring by configuring collectd and writing collectd plugins. See the manpage of collectd.conf and the collectd plugin architecture.

If you want to see an example implementation of distributed monitoring, see the distributed monitoring GitHub repository.

 * Poller

collectd has plugins to collect standard metrics such as CPU, memory, disk, and network utilization. See the read-type plugins on collectd's table of plugins.

 * Collector

You can store the metrics in Redis with the collectd Write Redis plugin. You need to put configuration into collectd's config file as follows:

<pre>
LoadPlugin write_redis
<Plugin write_redis>
  <Node "example">
    Host "localhost"
    Port "6379"
    Timeout 1000
  </Node>
</Plugin>
</pre>

 * Analytics Engine

Make a collectd plugin for analysis. The plugin is a Python script that reads the Redis data, analyzes it with scikit-learn, and sends the analysis result to the Transmitter. A rough sketch of the script is as follows:

<pre>
import collectd
import redis

def read(data=None):
    conn = redis.StrictRedis(host='localhost', port=6379)
    rawlist = conn.zrange('collectd/localhost/memory/memory-used',
                          -2, -1)
    ...
    # (analysis with scikit-learn)
    ...
    vl = collectd.Values(host='localhost', plugin='dma', type='gauge')
    vl.dispatch(values=[result])

collectd.register_read(read, 1)  # read interval: 1 sec
</pre>

If you create the above script with filename /opt/dma/lib/analysis.py, you need to put configuration into collectd's config file to load the plugin as follows:

<pre>
LoadPlugin python
<Plugin python>
  ModulePath "/opt/dma/lib"
  LogTraces true
  Interactive false
  Import "analysis"
</Plugin>
</pre>

 * Transmitter

Make a collectd plugin to transmit the analysis result with Python. A rough sketch of the script is as follows:

<pre>
import collectd

def notify(vl, data=None):
    # vl is the metrics data including severity, time and value.
    ...
    # (transmit the metrics data, e.g. execute an openstack command
    #  with python-openstackclient)
    ...

collectd.register_notification(notify)
</pre>

If you create the above script with filename /opt/dma/lib/write_openstack.py, you need to put configuration into collectd's config file to load the plugin and set the transmission policy as follows:

<pre>
LoadPlugin python
<Plugin python>
  ModulePath "/opt/dma/lib"
  LogTraces true
  Interactive false
  Import "write_openstack"
</Plugin>

LoadPlugin "threshold"
<Plugin "threshold">
  <Host "localhost">
    <Plugin "dma">
      <Type "gauge">
        WarningMax 0
        Hits 3
      </Type>
    </Plugin>
  </Host>
</Plugin>
</pre>
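The raw members that zrange returns from the Write Redis sorted sets still have to be decoded before analysis. The member format assumed below ("&lt;timestamp&gt;:&lt;value&gt;") and the helper name are illustrative, so check the actual data stored in your Redis instance. A small parser might look like:

```python
def parse_members(members):
    """Parse raw sorted-set members (bytes or str) into float values.

    Assumes each member is formatted as "<timestamp>:<value>", which is
    how the Write Redis plugin is assumed to store samples here.
    """
    values = []
    for m in members:
        if isinstance(m, bytes):
            m = m.decode()
        _, _, value = m.partition(":")  # keep only the value part
        values.append(float(value))
    return values

print(parse_members([b"1500000000.0:1024.5", b"1500000001.0:1030.0"]))
# -> [1024.5, 1030.0]
```

The parsed floats can then be fed directly to the Analytics Engine, e.g. the scikit-learn analysis sketched above.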

Redis
We recommend that you disable saving the Redis data to disk. You need to edit the "save" lines of redis.conf as follows:

<pre>
save ""
# save 900 1
# save 300 10
# save 60 10000
</pre>

Who is contributing to this guide?
Contact: dma-discuss@redhat.com
 * Yuki Kasuya
 * Toshiaki Takahashi
 * Tomofumi Hayashi