
Trove/DBInstanceLogOperationV2

Dependency

Dependent spec

Mission

Provide a specific API to the end user that allows manipulation of database log files. This feature provides the ability to access log files via Swift so that users can download them for auditing/troubleshooting purposes.

Why does Trove need it?

Short background: audit.

Auditing is a core component of compliance and security programs, and a generally accepted practice in IT operations. Relational databases were the first enterprise applications to embed auditing as a native platform feature. Yet part of this heritage is the stigma associated with auditing: vendors provided the basic function, but they failed to provide the performance and ease of management demanded by database administrators (DBAs), who remain loath to enable the feature and are among the last holdouts in accepting the use of database audit trails.
The reasons for their distaste of native audit are fully justified, from performance and data management nightmares to the fact that these audit trails were not originally designed for the tasks they are being applied to. Regardless, regulatory compliance demands accurate and complete records of transactions, and relational database audit trails produce just that.
Security and compliance drive the use of database auditing, and these log files provide a unique perspective that needs to be captured. Database vendors are building better auditing features into their products, minimizing historic impediments, but careful deployment is required to avoid these known issues.

Description

Log manipulations are designed to let the user perform log investigations. Since Trove is a PaaS-level project, its users cannot interact with the compute instance directly, only with the database through the given API (database operations).

Justification/Benefits

Justification

Performance tuning based on log file analysis.

Database throughput is always bounded by the available resources; after reaching that limit the database starts throwing exceptions that are logged in the general log (Cassandra system.log, Redis server log) or in a dedicated error log (MySQL, Percona, MariaDB, etc.).

Database audit.

Database auditing is the examination of audit or transaction logs for the purpose of tracking changes to data or database structure. Databases can be set to capture alterations to data and metadata.
Database Management will be audited approximately once every three years using a risk-based approach. Databases to be reviewed include those databases supporting mission critical university functions. The following topics should be addressed during the review:
  • Database Administration (policies and procedures, capacity planning)
  • Database Maintenance and Tuning (operational effectiveness)
  • Database Integrity (referential integrity, triggers)
  • Database Security (database security settings, auditing/logging)
  • Database Backup and Recovery (availability)

Resolving database startup issues based on error/general log analysis.

Example: MySQL. The error log contains information indicating when mysqld was started and stopped, and also any critical errors that occur while the server is running. If mysqld notices a table that needs to be automatically checked or repaired, it writes a message to the error log. Cassandra: heap-size errors caused by automatic memory allocation while the database service is launching (http://stackoverflow.com/questions/16243434/getting-outofmemory-in-cassandra-java-heap-space-version-1-0-7-frequently-on-dif).
The minimum requirements set forth in the “general overview and risk assessment” section below must be completed for the audit to qualify for core audit coverage. Following completion of the general overview and risk assessment, the auditor will use professional judgment to select specific areas for additional focus and audit testing.

Benefits

From the user's perspective, this feature completely covers the real-world use cases mentioned in the justification section (management, performance tuning, audit, etc.).

Impacts

This feature would not affect or break the current Trove API. It changes the perception of a Trove instance from a simple database server with a connection URL to something bigger. It touches on the inaccessibility of the instance, which is restricted by the terms of use of the public/private cloud, and on the PaaS model itself.

Database

Database changes are not required because dblogs are not tracked at the Trove backend as a resource that can be reused in the future.

Configuration

Guest configuration
  • scheduled_rotation = monthly, weekly, daily (string)
  • log_size = 100M, 1G (string)
  • rotate_count = 100, 1000 (int)
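
For illustration, a minimal sketch of how these guest options might be registered with oslo.config (the pattern used across OpenStack services); the option names come from the list above, while the "dblogs" group name and the defaults shown are assumptions:

    # Hypothetical registration of the proposed guest options using oslo.config.
    from oslo_config import cfg

    dblog_opts = [
        cfg.StrOpt('scheduled_rotation', default='weekly',
                   choices=['monthly', 'weekly', 'daily'],
                   help='How often logrotate should rotate database log files.'),
        cfg.StrOpt('log_size', default='100M',
                   help='Rotate a log file once it grows beyond this size (e.g. 100M, 1G).'),
        cfg.IntOpt('rotate_count', default=100,
                   help='Number of rotated log files to keep.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(dblog_opts, group='dblogs')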

How it works

More information at LOGROTATE.D

Public API Details

Create and save a database log file entry that contains the last N lines. HTTP method: POST

Route: /{tenant_id}/instance/{id}/dblogs

Request

   {
       "dblogs": {
         "filename" : "general_log",
         "lines_count" : 1000
       }
   }


Response

   {
       "dblog": {
         "instance_id" : "12345678-1111-2222-3333-444444444444",
         "file" : "mysql.log",
         "location" : "http://somewhere.com/container/file",
         "retrieved" : "1000 lines"
       }
   }

CLI style:

  trove dblog-create general_log <instance> --lines 1000
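
For illustration, a hedged sketch of the equivalent REST interaction using python-requests; the route and payload mirror the API above, while the endpoint URL, token handling, and the Swift download step are assumptions rather than part of this spec:

    # Hypothetical client call against the proposed dblogs route; TROVE_ENDPOINT,
    # TENANT_ID, INSTANCE_ID and TOKEN are placeholder values.
    import requests

    TROVE_ENDPOINT = 'http://trove.example.com:8779/v1.0'
    TENANT_ID = '<tenant_id>'
    INSTANCE_ID = '12345678-1111-2222-3333-444444444444'
    TOKEN = '<keystone-token>'

    resp = requests.post(
        '%s/%s/instance/%s/dblogs' % (TROVE_ENDPOINT, TENANT_ID, INSTANCE_ID),
        headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
        json={'dblogs': {'filename': 'general_log', 'lines_count': 1000}})
    resp.raise_for_status()

    dblog = resp.json()['dblog']
    # 'location' points at the Swift object holding the captured log lines,
    # which can be downloaded for auditing/troubleshooting.
    log_data = requests.get(dblog['location'], headers={'X-Auth-Token': TOKEN})
    print(dblog['file'], dblog['retrieved'])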

Log files rotation

We could define the same time frames for all datastores by adding a new template called “rotation.template”. Example:

templates/mysql/rotation.template (the same for each MySQL-based database, such as Percona, MariaDB, Galera):

       /var/log/mysql.log /var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/error.log {
               # how_often: "monthly" or "weekly"
               {{ how_often }}
               # log_size: e.g. 1000M
               size {{ log_size }}
               # rotate_count: e.g. 100
               rotate {{ rotate_count }}
               missingok
               create 640 mysql adm
               compress
               sharedscripts
               postrotate
                      test -x /usr/bin/mysqladmin || exit 0
                      # If this fails, check debian.conf!
                      MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
                      if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
                              # Really no mysqld or rather a missing debian-sys-maint user?
                              # If this occurs and is not an error, please report a bug.
                              #if ps cax | grep -q mysqld; then
                              if killall -q -s0 -umysql mysqld; then
                                      exit 1
                              fi
                      else
                              $MYADMIN flush-logs
                      fi
               endscript
       }

templates/cassandra/rotation.template:

       /var/log/cassandra/output.log {
               # how_often: "monthly" or "weekly"
               {{ how_often }}
               # log_size: e.g. 1000M
               size {{ log_size }}
               # rotate_count: e.g. 100
               rotate {{ rotate_count }}
               missingok
               copytruncate
               compress
       }

templates/redis/rotation.template:

       /var/log/redis/*.log {
               # how_often: "monthly" or "weekly"
               {{ how_often }}
               # log_size: e.g. 1000M
               size {{ log_size }}
               # rotate_count: e.g. 100
               rotate {{ rotate_count }}
               copytruncate
               compress
               missingok
       }

Placement strategy

Each “rotation.template” should be placed in a specific location on the image:
         the MySQL rotation config should be placed at /etc/logrotate.d/mysql-server
         the Cassandra rotation config should be placed at /etc/logrotate.d/cassandra
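
A minimal sketch of how the guest agent could render a “rotation.template” and drop it into the locations above, assuming Jinja2 rendering (the mechanism Trove already uses for configuration templates); the helper function and file handling are illustrative, only the paths and template variables come from this spec:

    # Hypothetical rendering of templates/<datastore>/rotation.template into
    # /etc/logrotate.d/; target paths follow the placement strategy above.
    from jinja2 import Template

    ROTATION_TARGETS = {
        'mysql': '/etc/logrotate.d/mysql-server',
        'cassandra': '/etc/logrotate.d/cassandra',
    }

    def write_rotation_config(datastore, how_often, log_size, rotate_count):
        with open('templates/%s/rotation.template' % datastore) as f:
            template = Template(f.read())
        rendered = template.render(how_often=how_often,
                                   log_size=log_size,
                                   rotate_count=rotate_count)
        with open(ROTATION_TARGETS[datastore], 'w') as f:
            f.write(rendered)

    # Example: weekly rotation, rotate at 1000M, keep 100 rotated files.
    write_rotation_config('mysql', 'weekly', '1000M', 100)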

Raised questions

  1. When does the user need to access log files?
  2. How often does the user need to pull logs from the database instance?
  3. Can syslog be the solution?
  4. Are there any plans for a log file rotation policy?

Answers to raised questions

When does the user need to access log files?

The best way to answer the question is with an example. Suppose we have an instance in the SHUTDOWN state, which means the database service is down. How is the user able to find out why it was shut down? Only the database log(s) can answer this question. After that the user knows what went wrong with the database and is able to fix the issue that occurred.

How often does the user need to pull logs from the database instance?

Whenever the user wants them. As in the previous example, an instance in the SHUTDOWN state can only be diagnosed through its database log(s). Short answer: it depends on the user's needs.

Can syslog be the solution?

Short answer: no, it would not work with Swift. Swift works through its client and accepts only HTTP (TCP transport) streams, whereas a syslog server works over the UDP protocol. This means that we cannot rely on a possible remote syslog server that may or may not be available, and we cannot rely on a proprietary solution that is not part of the cloud. We have only one consistent storage per cloud deployment: Swift. Outside the cloud syslog is the best solution; inside the cloud it is not, until there is some kind of Log-storage(syslog)-as-a-Service.

Are there any plans for a log file rotation policy?

As part of iteration 2 we could define the same time frames for all datastores by adding a new template called “rotation.template”.


Getting part of the given log by a proposed time frame?

IMHO it would be hard to manage: we would need to define a special data format that fits all datastore logging formats, which is almost impossible because each database uses its own logging framework from its own technology stack.

A periodic task or a scheduled task for flushing logs before the rotation?

It is possible and even required: since we would set the rotation time frame, we could set up a periodic task that flushes, presumably, all logs. The time frame equals one month.
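
As a hedged illustration of what such a task could do for the MySQL case (other datastores rely on copytruncate, as in their templates above): the commands mirror the postrotate script in the MySQL template, while the wrapper function and its scheduling are hypothetical.

    # Hypothetical pre-rotation flush for MySQL, to be wired into the guest
    # agent's periodic task machinery with a monthly time frame.
    import subprocess

    def flush_and_rotate_mysql_logs():
        # Ask mysqld to close and reopen its log files so logrotate gets a clean cut-off.
        subprocess.check_call(['/usr/bin/mysqladmin',
                               '--defaults-file=/etc/mysql/debian.cnf', 'flush-logs'])
        # Force an immediate rotation using the rendered /etc/logrotate.d/mysql-server config.
        subprocess.check_call(['logrotate', '--force', '/etc/logrotate.d/mysql-server'])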