
Trove/DBInstanceLogOperationV1


Mission

Provide a specific API to the end user for manipulating database log files. This feature provides the ability to publish log files to Swift so that users can download them for auditing and troubleshooting purposes.

Why does Trove need it?

Short background: auditing.

Auditing is a core component of compliance and security programs, and a generally accepted practice in IT operations. Relational databases were the first enterprise applications to embed auditing as a native platform feature. Yet part of this heritage is the stigma associated with auditing: while vendors provided the basic function, they failed to provide the performance and ease of management demanded by database administrators (DBAs), who remain loath to enable the feature and remain one of the last holdouts against accepting the use of database audit trails.

The reasons for their distaste for native audit are fully justified, from performance and data management nightmares to the fact that these audit trails were not originally designed for the tasks they are being applied to. Regardless, regulatory compliance demands accurate and complete records of transactions, and relational database audit trails produce just that.

Security and compliance drive the use of database auditing, and these log files provide a unique perspective that needs to be captured. Database vendors are building better auditing features into their products, minimizing historic impediments, but careful deployment is required to avoid these known issues.

Real-life use-cases

Performance tuning based on log file analysis.

Database throughput is always limited by the available resources; once a defined limit is reached, the database starts throwing exceptions that are logged in a general log (Cassandra system.log, Redis server log) or a dedicated error log (MySQL, Percona, MariaDB, etc.).

Database audit.

Database auditing is the examination of audit or transaction logs for the purpose of tracking changes to data or database structure. Databases can be configured to capture alterations to data and metadata.
Database Management will be audited approximately once every three years using a risk-based approach. Databases to be reviewed include those databases supporting mission critical university functions. The following topics should be addressed during the review:
  • Database Administration (policies and procedures, capacity planning)
  • Database Maintenance and Tuning (operational effectiveness)
  • Database Integrity (referential integrity, triggers)
  • Database Security (database security settings, auditing/logging)
  • Database Backup and Recovery (availability)

Resolving database startup issues based on error/general log analysis.

Example: MySQL. The error log contains information indicating when mysqld was started and stopped, and also any critical errors that occur while the server is running. If mysqld notices a table that needs to be automatically checked or repaired, it writes a message to the error log. Cassandra: heap size errors caused by automatic memory allocation when the database service is launched (http://stackoverflow.com/questions/16243434/getting-outofmemory-in-cassandra-java-heap-space-version-1-0-7-frequently-on-dif).
The minimum requirements set forth in the “general overview and risk assessment” section below must be completed for the audit to qualify for core audit coverage. Following completion of the general overview and risk assessment, the auditor will use professional judgment to select specific areas for additional focus and audit testing.

Design

Log manipulations are designed to let users perform log investigations. Since Trove is a PaaS-level project, its users cannot interact with the compute instance directly, only with the database through the given API (database operations). The deployer decides which log files are made available to the Trove user.

API Schema

API Details

Three new resources, dblog-create, dblog-list and dblog-show, will be exposed as part of the Trove API.
dblog-list provides the ability to list all available (availability is defined by Trove) database logging filenames.
dblog-show provides the ability to list all available (availability is defined by Trove) database logging filenames per datastore version.
dblog-create provides the ability to save a database logging file into a Swift container.
To implement this capability, the create/modify/list instance operations will be extended in a manner that does not break the existing 1.0 contract. These operations will permit a user to create a new database logging file entry for an already existing instance, list all available database logging filenames for all registered datastore versions (basically, for all datastore managers across all versions), and show all available database logging filenames for a certain datastore version (manager).
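
As a rough illustration, these resources could map to client calls along the following lines (the exact python-troveclient syntax is an assumption and may differ in the final implementation):

  trove dblog-list                           # all logging filenames for all datastore versions
  trove dblog-show <datastore_version>       # logging filenames for one datastore version (manager)
  trove dblog-create <filename> <instance>   # save the given log file into Swift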

DownloadDBLogFile request parameters and DBLog response object

Description: Downloads the current database log file into a Swift container.

Request Parameters:

Parameter name            | Description                                                              | Type   | Required
Instance ID or Name       | The customer-assigned name of the instance that contains the log files. | String | Yes
Database logging filename | Available database logging filename                                      | String | Yes

Response Elements

Name        | Description                                                     | Type  | Errors
LogFileData | The following elements are returned in a structure named DBLog | DBLog | DBInstanceNotFound, HTTP 404

DBLog response object

DBLog describes a database log file stored in a Swift container.
Name                  | Description   | Type
Instance ID           | Instance UUID | String
Filename in container | Log file name | String

API Calls


Get the list of all available database logging files for all datastore versions. HTTP method GET

Request

(No message body)

Response

    {
        "dblogs": [
            {
                "datastore_version_manager": "mysql",
                "datastore_log_files": "general_log, log_slow_queries, log-error"
            },
            {
                "datastore_version_manager": "cassandra",
                "datastore_log_files": "system_log"
            },
            {
                "datastore_version_manager": "redis",
                "datastore_log_files": "system_log"
            }
        ]
    }
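
A possible way to issue this call with curl is sketched below; the /dblogs URI, port and header follow the usual Trove API conventions and are assumptions here rather than part of the agreed contract:

    curl -s -H "X-Auth-Token: $TOKEN" \
         http://trove-api:8779/v1.0/$TENANT_ID/dblogs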

Get the list of all available database logging files per given datastore version. HTTP method GET

Response

    {
        "dblogs": [
            {
                "datastore_version_manager": "mysql",
                "datastore_log_files": "general_log, log_slow_queries, log-error"
            }
        ]
    }

Create and save database logging file entry. HTTP method POST

Request

   {
       "dblogs": {
         "instance_id" : "12345678-1111-2222-3333-444444444444",
         "filename" : "general_log"
       }
   }


Response

   {
       "dblog": {
         "instance_id" : "12345678-1111-2222-3333-444444444444",
         "file" : "mysql.log",
         "location" : "http://somewhere.com/container/file"
       }
   }
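
For reference, a minimal sketch of the same create call issued with curl (the /dblogs URI, port and headers are assumptions, as above):

    curl -s -X POST \
         -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
         -d '{"dblogs": {"instance_id": "12345678-1111-2222-3333-444444444444", "filename": "general_log"}}' \
         http://trove-api:8779/v1.0/$TENANT_ID/dblogs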

Server-side configuration

The Trove taskmanager and API services would require the following configuration values:
  1. allow_database_logging = True/False
  2. allow_database_log_files_audit = True/False
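
A minimal sketch of how these options might look in trove.conf / trove-taskmanager.conf (the option names come from the list above; placing them in the [DEFAULT] section is an assumption):

    [DEFAULT]
    # Allow users to save database log files to Swift
    allow_database_logging = True
    # Allow audit-class log files (e.g. general/query logs) to be exposed as well
    allow_database_log_files_audit = True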

Guest-side configuration

As on the server side, the guest side requires several configuration values:
  1. naming convention: {instance_id} + {path according to incoming filename} + daytime.log
  2. manifest convention: *.log or *tar.bz2 or *tar.gz
  3. Storage Strategy: Swift
  4. Container: logs_files
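
As an illustration of the naming convention above, a general_log saved on 2014-03-11 for instance 12345678-1111-2222-3333-444444444444 could end up in the logs_files container under an object name like the following (the exact path layout and timestamp format are assumptions):

    logs_files/12345678-1111-2222-3333-444444444444/mysql/general_log/2014-03-11.log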

Iteration 2

Polling N lines of a given log file

A new parameter, lines, is added to the body of the POST request.

Create and save database logging file entry. HTTP method POST

Request

   {
       "dblogs": {
         "instance_id" : "12345678-1111-2222-3333-444444444444",
         "filename" : "general_log",
         "lines": "1000"
       }
   }


Response

   {
       "dblog": {
         "instance_id" : "12345678-1111-2222-3333-444444444444",
         "file" : "mysql.log",
         "location" : "http://somewhere.com/container/file"
       }
   }

CLI style:

  trove dblog-create general_log <instance> --lines 1000

Log files rotation

We could define the same time frames for all datastores by adding a new template called “rotation.template”. Example:

templates/mysql/rotation.template:

       /var/log/mysql.log /var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/error.log {
       "Template:How often" # "monthly" or "weekly"
       log_size "Template:Actual seze" # 1000M
       rotate Template:Actual rotation count #100
       missingok
       create 640 mysql adm
       compress
       sharedscripts
       postrotate
               test -x /usr/bin/mysqladmin || exit 0
               # If this fails, check debian.conf!
               MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
               if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
                 # Really no mysqld or rather a missing debian-sys-maint user?
                 # If this occurs and is not a error please report a bug.
                 #if ps cax | grep -q mysqld; then
                 if killall -q -s0 -umysql mysqld; then
                   exit 1
                 fi
               else
                 $MYADMIN flush-logs
               fi
       endscript
     }

templates/cassandra/rotation.template:

       /var/log/cassandra/output.log {
       "Template:How often" # "monthly" or "weekly"
       log_size "Template:Actual seze" # 1000M
       rotate Template:Actual rotation count #100
       missingok
       copytruncate
       compress
    }

Placement strategy

Each “rotation.template” should be placed in a specific location in the image:
         the mysql rotation config should be placed at: /etc/logrotate.d/mysql-server
         the cassandra rotation config should be placed at: /etc/logrotate.d/cassandra
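
A minimal sketch of how the templates could be baked into the guest image, e.g. from an image-build or firstboot script (the copy step itself is an assumption about the build process; the target paths follow the placement rules above):

         install -m 0644 templates/mysql/rotation.template     /etc/logrotate.d/mysql-server
         install -m 0644 templates/cassandra/rotation.template /etc/logrotate.d/cassandra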

Raised questions

  1. When does the user need to access log files?
  2. How often does the user need to pull logs from the database instance?
  3. Can syslog be the solution?
  4. Any plans for a log file rotation policy?

Answers to raised questions

When does the user need to access log files?

The best way to answer this question is with an example. Suppose we have an instance that is in the SHUTDOWN state, which means the database service is down. How is the user able to find out why it was shut down? Only the database log(s) can answer this question. After reading them, the user knows what went wrong with the database and is able to fix the issue that occurred.

How often does the user need to pull logs from the database instance?

Whenever the user wants them. As in the example above, if an instance is in the SHUTDOWN state, only the database log(s) can tell the user why the service went down. Short answer: it depends on the user's needs.

Can syslog be the solution?

Short answer: no, it would not work with Swift. Swift works through a client and accepts only HTTP (TCP transport) streams, while a syslog server works over the UDP protocol. This means we cannot rely on a possible remote syslog server that may or may not be reachable, and we cannot rely on a proprietary solution that is not part of the cloud. We have only one consistent storage per cloud deployment: Swift. Outside the cloud syslog is the best solution; inside the cloud it is not, until there is some kind of Log-storage(syslog)-as-a-Service.

Any plans for a log file rotation policy?

As part of iteration 2 we could define the same time frames for all datastores by adding a new template called “rotation.template”.


Getting part of the given log for a proposed time frame?

IMHO it would be hard to manage: we would need to define a special data format that fits all datastore logging formats, which is almost impossible because each database uses its own logging framework from the stack of technologies used in its development.

A periodic task or a scheduled task for flushing logs before the rotation?

It is possible and even required: since we would set the rotation time frame, we could set up a periodic task that flushes all logs. The time frame equals one month.
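
A minimal sketch of such a scheduled task, written as an /etc/crontab-style entry on the guest (whether it is driven by cron or by a guestagent periodic task is an open design choice; the entry below is only an illustration):

    # Force a monthly rotation; the postrotate step flushes the database logs
    0 0 1 * * root /usr/sbin/logrotate -f /etc/logrotate.d/mysql-server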