 
This page documents the Monasca Logging solution that is in progress.
 
== Log Management - Client Side ==

=== Monasca Log Agent ===

Monitors one or more log files, adds meta information (e.g. dimensions), authenticates with Keystone, and sends the logs (in bulk) to the Monasca Log API.

Base technology: Logstash

Plugin: https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/logstash-output-monasca_log_api
  
<br />
 
'''Note:'''<br />
 
'''Currently we only have this page to discuss our proposals. For that reason some "metric related stuff" can be found here too. We will move or remove it whenever the discussion is finished.'''
 
<br /><br />
 
 
=== Log Data Flow ===
 
The following diagram visualizes the integration of logs into the processing pipeline of Monasca. We indicate some shortcuts we want to take as a first step, as well as some advanced functionality (multi-tenancy) that we plan for the future.
 
 
[[File:DrawIO LogManagement.png|center]]
 
 
 
== Log Client ==
 
=== Monasca Agent ===
 
 
==== Collect and forward Log Data ====
 
The agent should be extended to collect log data and forward it to the Monasca Log API.

One option is to implement or integrate a Logstash-like collector (e.g. Beaver). Beaver is a lightweight Python log file shipper that is used to send logs to an intermediate broker for further processing by Logstash.
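To make the idea concrete, here is a minimal Python sketch of such a shipper. It is illustrative only: the endpoint URL, token, file path, and dimension values are example assumptions, and a real shipper (the agent or Beaver) would also handle batching, retries, and Keystone re-authentication.

<pre>
import time

import requests

# Example values; a real shipper would obtain the token from Keystone.
LOG_API = "http://monasca-log-api:5607/v2.0/log/single"
TOKEN = "27feed73a0ce4138934e30d619b415b0"
HEADERS = {
    "Content-Type": "application/json",
    "X-Auth-Token": TOKEN,
    "X-Application-Type": "apache",
    "X-Dimensions": "applicationname:WebServer01,environment:production",
}

def follow(path):
    """Yield lines appended to the file, like 'tail -f'."""
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.5)

for line in follow("/var/log/apache2/access.log"):
    # One request per line for simplicity; the bulk endpoint would batch these.
    requests.post(LOG_API, headers=HEADERS, json={"message": line})
</pre>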
 
 
=== Plugins ===
 
==== Logstash Output Plugin ====
 
https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/logstash-output-monasca_api
 
 
==== Other Output Plugins ====
 
'''TODO''' e.g. for fluentd...
 
 
== Log Management Backend ==
 
Integrate the ELK stack into the existing Monasca architecture. This covers receiving, authenticating, processing, and storing logs.
 
* https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/kibana
 
* https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/ansible-monasca-log-schema
 
* https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/ansible-monasca-elkstack
 
  
== Log Management - Server Side - Consuming Logs ==

=== Monasca Log API ===

Consumes logs from the agents, authorizes them, and publishes them to Kafka.

https://github.com/openstack/monasca-log-api

https://github.com/openstack/monasca-log-api/tree/master/docs

===== Name =====

Note: <br/>
 
Deviations between the Monasca Metric API documentation (https://github.com/stackforge/monasca-api/blob/master/docs/monasca-api-spec.md) and its implementation (https://github.com/stackforge/monasca-api):
 
* Document: Nothing is mentioned about the allowed characters in the name
 
* Implementation: Characters in the name are restricted to a-z A-Z 0-9 _ . -
 
  
'''The implementation of the Log API follows the implementation of the Metric API!'''
===== Dimensions =====

Note: <br/>
Deviations between the Monasca Metric API documentation (https://github.com/stackforge/monasca-api/blob/master/docs/monasca-api-spec.md) and its implementation (https://github.com/stackforge/monasca-api):

* Document: The first character in the dimension is restricted to the following: a-z A-Z 0-9 _ / \ $. However, the next characters may be any character except for the following: ; } { = , & ) ( ".
* Implementation: Characters in the dimension key are restricted to a-z A-Z 0-9 _ .

'''The implementation of the Log API follows the implementation of the Metric API!'''
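A small Python sketch of these restrictions as validation code (the function name is illustrative; the character classes mirror the lists above):

<pre>
import re

# Character sets as listed above (following the Metric API implementation).
NAME_RE = re.compile(r"^[a-zA-Z0-9_.\-]+$")   # name: a-z A-Z 0-9 _ . -
DIM_KEY_RE = re.compile(r"^[a-zA-Z0-9_.]+$")  # dimension key: a-z A-Z 0-9 _ .

def validate(name, dimensions):
    """Raise ValueError if the name or a dimension key uses a forbidden character."""
    if not NAME_RE.match(name):
        raise ValueError("invalid name: %s" % name)
    for key in dimensions:
        if not DIM_KEY_RE.match(key):
            raise ValueError("invalid dimension key: %s" % key)

validate("http_access_log", {"applicationname": "WebServer01"})  # passes
</pre>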
===== Request Line =====

* POST /v2.0/log/single - Endpoint for single and multi-line logs; maximum log size: 1MB<br />
* POST /v2.0/log/bulk - Endpoint for bulk logs (the logs must be line-separated); maximum log size: 5MB (TODO)<br />
 
  
===== Request Headers =====

* Content-Type (string, required) - application/json<br />
* X-Auth-Token (string, required) - Keystone authentication token<br />
* X-Application-Type (string, optional) - Type of application (a hint for how to parse the logs)<br />
* X-Dimensions (string, optional) - A dictionary of (key, value) pairs to structure the logs and help later on with filtering (dashboard)<br />
 
<br />
 
  
'''Timestamps:'''<br />
The priority for determining the timestamp:<br />
1) Try to parse the timestamp from the original log data.<br />
2) If that doesn't work:<br />
* RESTful API: Use the receiving time as the timestamp.<br />
* Future version: Syslog API: Take the timestamp from syslog.<br />

Monasca operates with the UTC timezone; that means the timestamp is converted to the corresponding time in UTC.
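A sketch of this priority order in Python (the timestamp format is illustrative; real log data would need format detection per application type):

<pre>
from datetime import datetime, timezone

def resolve_timestamp(log_line):
    """1) Try to parse a timestamp from the log data; 2) fall back to receive time."""
    try:
        # Illustrative: assumes an ISO-like, UTC timestamp at the start of the line.
        stamp = datetime.strptime(log_line[:19], "%Y-%m-%d %H:%M:%S")
        return stamp.replace(tzinfo=timezone.utc)
    except ValueError:
        # Receiving time, already in UTC as Monasca expects.
        return datetime.now(timezone.utc)
</pre>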
 
  
===== Request Body =====

The original (potentially unstructured) log data.

===== Request Examples =====
 
 
 
 
'''Single Log - JSON'''
 
<pre>
POST /v2.0/log/single HTTP/1.1
Content-Type: application/json
X-Auth-Token: 27feed73a0ce4138934e30d619b415b0
X-Application-Type: apache
X-Dimensions: applicationname:WebServer01,environment:production

{"message":"Hello World!", "from":"hoover"}
</pre>
 
 
 
 
 
 
 
'''Bulk of Logs - Plain Text''' (TODO)
 
<pre>
POST /v2.0/log/bulk HTTP/1.1
Content-Type: text/plain
X-Auth-Token: 27feed73a0ce4138934e30d619b415b0
X-Application-Type: apache
X-Dimensions: applicationname:WebServer01,environment:production

Hello\nWorld
</pre>
 
 
 
===== Response Status Code =====
 
204 - No Content
 
 
 
===== Response Body =====
 
This request does not return a response body.
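For reference, the single-log example above expressed as a Python client call (host and port are assumptions):

<pre>
import requests

response = requests.post(
    "http://monasca-log-api:5607/v2.0/log/single",  # host/port are assumptions
    headers={
        "X-Auth-Token": "27feed73a0ce4138934e30d619b415b0",
        "X-Application-Type": "apache",
        "X-Dimensions": "applicationname:WebServer01,environment:production",
    },
    json={"message": "Hello World!", "from": "hoover"},  # sets Content-Type: application/json
)
assert response.status_code == 204  # success returns no content
</pre>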
 
 
 
=== Monasca Log Transformer ===

Consumes logs from Kafka, transforms them, and publishes them back to Kafka.

=== Monasca Log Persister ===

Consumes logs from Kafka, prepares them for bulk storage, and stores them into Elasticsearch.
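A minimal sketch of the Persister's consume-and-bulk-index loop, assuming the kafka-python and elasticsearch client libraries (topic, index, and batch size are illustrative):

<pre>
import json

from elasticsearch import Elasticsearch, helpers
from kafka import KafkaConsumer

consumer = KafkaConsumer("transformed-log", bootstrap_servers="kafka:9092")
es = Elasticsearch(["http://elasticsearch:9200"])

def batches(messages, size=500):
    """Group consumed messages into chunks for efficient bulk indexing."""
    batch = []
    for message in messages:
        batch.append(json.loads(message.value))
        if len(batch) >= size:
            yield batch
            batch = []

for batch in batches(consumer):
    # One Elasticsearch bulk request per batch instead of one index call per log.
    helpers.bulk(es, [{"_index": "logs", "_source": doc} for doc in batch])
</pre>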
 
 
 
=== Monasca Log Metrics ===

Consumes logs from Kafka,

=== Monasca Log Storage ===

All logs are stored in Elasticsearch.

== Log Management - Server Side - Visualizing Logs ==

=== Monasca Kibana Server ===

Authorization and visualization of logs.

Base technology: Kibana

Plugins:

https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/fts-keystone

https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/keystone-v3-client
 
