OpsGuide/Logging

Where Are the Logs?
Most services use the convention of writing their log files to subdirectories of the /var/log directory, as listed in the OpenStack log locations table.
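As a rough orientation (these paths are typical of an Ubuntu-based install and vary by distribution):

    nova        /var/log/nova
    glance      /var/log/glance
    cinder      /var/log/cinder
    keystone    /var/log/keystone
    horizon     /var/log/apache2 (horizon runs under Apache)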

Reading the Logs
OpenStack services use the standard logging levels, in order of increasing severity: TRACE, DEBUG, INFO, AUDIT, WARNING, ERROR, and CRITICAL. That is, messages appear in the logs only if they are at or above the configured log level, with DEBUG allowing all log statements through. For example, TRACE is logged only if the software has a stack trace, while INFO is logged for every message, including those that are only informational.

To disable DEBUG-level logging, edit the service's configuration file. For Compute, for example, edit /etc/nova/nova.conf as follows:
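    # /etc/nova/nova.conf (excerpt); 'debug' is the standard logging
    # option shared by most OpenStack services, so the same setting
    # works in their config files too (older releases also had a
    # separate 'verbose' option)
    [DEFAULT]
    debug = false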

Keystone is handled a little differently: it uses a standard Python logging configuration file. To modify the logging level, edit the /etc/keystone/logging.conf file and look at the logger_root and handler_file sections.
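For orientation, here is a sketch of what those sections look like. The file follows Python's stock fileConfig format; the exact handler names and paths may differ on your install:

    [logger_root]
    level = WARNING
    handlers = file

    [handler_file]
    class = FileHandler
    level = DEBUG
    args = ('/var/log/keystone/keystone.log', 'a')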

Logging for horizon is configured in /etc/openstack-dashboard/local_settings.py. Because horizon is a Django web application, it follows the Django Logging framework conventions.
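As a minimal sketch of those conventions (this uses Django's standard LOGGING dictConfig format; the stock file ships with a more complete configuration):

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        # send everything from the horizon logger to the console
        'handlers': {
            'console': {'class': 'logging.StreamHandler'},
        },
        'loggers': {
            'horizon': {'handlers': ['console'], 'level': 'DEBUG'},
        },
    }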

The first step in finding the source of an error is typically to search for a CRITICAL or ERROR message in the log, starting at the bottom of the log file.
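For example, a quick way to do this from the command line, assuming the default /var/log/nova location for Compute logs:

    # show the most recent ERROR/CRITICAL entries in the nova-compute log
    grep -E 'ERROR|CRITICAL' /var/log/nova/nova-compute.log | tail -20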

When you find one, the corresponding ERROR (Python traceback) usually follows immediately. For example, when cinder-volumes fails to start, it logs a stack trace because its volume back end has been unable to set up the storage volume, most likely because the LVM volume that the configuration expects does not exist.

Another common error appears when a nova service fails to connect to the RabbitMQ server: the log shows an ERROR containing a connection refused message from the messaging client.

Tracing Instance Requests
When an instance fails to behave properly, you will often have to trace activity associated with that instance across the log files of the various OpenStack services, and across both the cloud controller and compute nodes.

The typical way is to trace the UUID associated with an instance across the service logs.

Consider the following example: list your instances with nova list and note the UUID shown in the ID column for the instance in question.

If you search for this UUID on the cloud controller in the nova log files, it typically appears in the nova-api and nova-scheduler logs. If you search for it on the compute nodes under /var/log/nova, it appears in the nova-compute log. If no ERROR or CRITICAL messages appear, the most recent log entry that reports the UUID may provide a hint about what has gone wrong.
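A minimal sketch of the search, assuming the default /var/log/nova layout on each host (substitute the real UUID for the placeholder):

    # run on the cloud controller and on each compute node
    grep <instance-uuid> /var/log/nova/*.log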

Adding Custom Logging Statements
If there is not enough information in the existing logs, you may need to add your own custom logging statements to the nova-* services.

The source files are located under your Python package directory, for example /usr/lib/python2.7/dist-packages/nova on an Ubuntu install.

To add logging statements, the following lines should be near the top of the file; for most files, they are already there:
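    # the conventional logging setup in nova source files of this era
    # (newer releases import from oslo_log instead)
    from nova.openstack.common import log as logging
    LOG = logging.getLogger(__name__)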

To add a DEBUG logging statement, you would do:
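    # an illustrative statement; the message text is up to you
    LOG.debug("this is a custom debugging statement")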

You may notice that all the existing logging messages are preceded by an underscore and surrounded by parentheses, for example:
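    # the underscore function is gettext's translation marker
    LOG.debug(_("this is a log message marked for translation"))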

This formatting is used to support translation of logging messages into different languages using the gettext internationalization library. You don't need to do this for your own custom log messages. However, if you want to contribute code that includes logging statements back to the OpenStack project, you must surround your log messages with underscores and parentheses.

RabbitMQ Web Management Interface or rabbitmqctl
Aside from connection failures, RabbitMQ log files are generally not useful for debugging OpenStack-related issues. Instead, we recommend you use the RabbitMQ web management interface. Enable it on your cloud controller:
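    # the management plugin ships with RabbitMQ; enable it and restart
    rabbitmq-plugins enable rabbitmq_management
    service rabbitmq-server restart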

The RabbitMQ web management interface is accessible on your cloud controller at http://localhost:55672. (RabbitMQ 3.0 and later serve the management interface on port 15672 instead.)

An alternative to enabling the RabbitMQ web management interface is to use the rabbitmqctl commands. For example, rabbitmqctl list_queues | grep cinder displays any messages left in the queue. If there are messages, it's a possible sign that cinder services didn't connect properly to RabbitMQ and might have to be restarted.
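If you want the backlog counts spelled out, list_queues accepts column names such as name and messages:

    # list each queue with its message backlog, filtered to cinder queues
    rabbitmqctl list_queues name messages | grep cinder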

Items to monitor for RabbitMQ include the number of items in each of the queues and the processing time statistics for the server.

Centrally Managing Logs
Because your cloud is most likely composed of many servers, you must check the logs on each of those servers to properly piece an event together. A better solution is to send the logs of all servers to a central location so that they can all be accessed from the same place.

The choice of central logging engine will be dependent on the operating system in use as well as any organizational requirements for logging tools.

Syslog choices
There are a large number of syslog engines available, each with differing capabilities and configuration requirements. A minimal rsyslog forwarding example follows the list below.


 * rsyslog
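As a sketch of the client side with rsyslog, assuming a central log host at 192.168.1.10 listening on UDP port 514 (the address and port are placeholders for your own log server):

    # /etc/rsyslog.d/99-forward.conf
    # a single '@' forwards over UDP; use '@@' for TCP
    *.* @192.168.1.10:514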