Monasca/Logging

Revision as of 13:26, 9 August 2016

This page documents the Monasca logging solution, which is a work in progress.

== Log Management - Client Side ==

=== Monasca Log Agent - Logstash ===

Monitors one or more log files, adds meta information (e.g. dimensions), authenticates with Keystone, and sends the logs in bulk to the Monasca Log API.

Base technology: Logstash

Plugin: https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/logstash-output-monasca_log_api
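The agent's bulk shipping step can be sketched as follows. This is a minimal illustration, not the plugin's actual implementation; the payload shape (a top-level <code>dimensions</code> object plus a <code>logs</code> array of messages) is an assumption made for the example.

```python
# Sketch of a log agent's batching step: wrap raw log lines plus
# per-batch dimensions into one bulk payload for the Monasca Log API.
# The payload shape is an illustrative assumption.

def build_bulk_payload(lines, dimensions):
    """Wrap raw log lines into a single bulk body."""
    return {
        "dimensions": dict(dimensions),  # e.g. {"hostname": "node-1"}
        "logs": [{"message": line} for line in lines],
    }

payload = build_bulk_payload(
    ["ERROR oom-killer invoked", "INFO service started"],
    {"hostname": "node-1", "service": "compute"},
)
```

Sending one bulk body per interval, rather than one request per line, keeps the number of authenticated round trips to the Log API small.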

=== Monasca Log Agent - Beaver ===

https://github.com/python-beaver/python-beaver

== Log Management - Server Side - Consuming Logs ==

=== Monasca Log API ===

Consumes logs from the agents, authorizes them, and publishes them to Kafka.

https://github.com/openstack/monasca-log-api

https://github.com/openstack/monasca-log-api/tree/master/docs
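Before publishing to Kafka, the Log API attaches tenant metadata to each log. The sketch below shows one plausible envelope; the field names (<code>creation_time</code>, <code>meta</code>, <code>tenantId</code>) are assumptions for illustration, not the actual wire format.

```python
import time

# Hypothetical envelope the Log API might wrap around each authorized log
# before publishing to Kafka: tenant/region metadata plus the log itself.
# All field names here are illustrative assumptions.

def envelope(log, tenant_id, region="region-1"):
    return {
        "creation_time": int(time.time()),
        "meta": {"tenantId": tenant_id, "region": region},
        "log": log,
    }

env = envelope(
    {"message": "disk full", "dimensions": {"hostname": "node-1"}},
    tenant_id="abc123",
)
```

Carrying the tenant ID in the envelope lets every downstream consumer (transformer, persister, metrics) enforce tenant separation without re-contacting Keystone.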

=== Monasca Log Transformer ===

Consumes logs from Kafka, transforms them, and publishes to Kafka.
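The transformation rules are not specified on this page; as one illustrative example, a transformer might derive a severity level from the raw message so downstream consumers can filter on it:

```python
import re

# Illustrative transformation: tag each record with a severity level
# parsed from its message. The real Transformer's rules may differ;
# this is a minimal sketch.

SEVERITIES = ("CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG")

def transform(record):
    msg = record.get("message", "")
    for level in SEVERITIES:
        if re.search(r"\b%s\b" % level, msg):
            record["level"] = level
            break
    else:
        record["level"] = "UNKNOWN"
    return record

out = transform({"message": "2016-08-09 ERROR connection refused"})
```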

=== Monasca Log Persister ===

Consumes logs from Kafka, prepares them for bulk storage, and stores them into Elasticsearch.
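"Prepares them for bulk storage" can be pictured as building a body for Elasticsearch's bulk API, where each document is preceded by an action line and everything is newline-delimited. The index name below is an assumed example, not the persister's actual configuration.

```python
import json

# Sketch of preparing log records for Elasticsearch's bulk API:
# one action line per document, newline-delimited, trailing newline.
# The index name is an assumed example.

def to_bulk_body(records, index="monasca-logs"):
    lines = []
    for rec in records:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(rec))
    return "\n".join(lines) + "\n"

body = to_bulk_body([{"message": "a"}, {"message": "b"}])
```

Batching documents this way amortizes indexing overhead, which matters at log volumes far higher than metric volumes.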

=== Monasca Log Metrics ===

Consumes logs from Kafka, creates metrics for logs with severity CRITICAL, ERROR, or WARNING, and publishes them to Kafka.
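A minimal sketch of turning high-severity logs into counter metrics, assuming records already carry a severity level; the metric naming scheme (<code>log.error_count</code> etc.) is an assumption for illustration.

```python
from collections import Counter

# Sketch: count CRITICAL/ERROR/WARNING logs and emit one counter
# metric per level. The metric names are illustrative assumptions.

ALERT_LEVELS = {"CRITICAL", "ERROR", "WARNING"}

def logs_to_metrics(records):
    counts = Counter(
        r["level"] for r in records if r.get("level") in ALERT_LEVELS
    )
    return [
        {"name": "log.%s_count" % level.lower(), "value": n}
        for level, n in sorted(counts.items())
    ]

metrics = logs_to_metrics([
    {"level": "ERROR"},
    {"level": "ERROR"},
    {"level": "INFO"},
    {"level": "WARNING"},
])
```

Publishing these as ordinary metrics lets the existing Monasca alarm pipeline trigger on log activity without consuming raw logs itself.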

=== Monasca Log Storage ===

All logs are stored in Elasticsearch.

== Log Management - Server Side - Visualizing Logs ==

=== Monasca Kibana Server ===

Authorizes users with Keystone and visualizes the logs stored in Elasticsearch.

Base technology: Kibana

Plugins:

https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/fts-keystone

https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/keystone-v3-client


== Log Data Flow ==

TODO: must be updated!

The following diagram visualizes the integration of logs into the Monasca processing pipeline. We have indicated some shortcuts we want to take as a first step, as well as some advanced functionality (multi-tenancy) planned for the future.

[[File:DrawIO LogManagement.png|bigpx|center]]