Zaqar/bp/havana/perf-testing

= Marconi: Automated Performance Testing =

See also
 * https://etherpad.openstack.org/p/marconi-benchmark-plans
 * https://etherpad.openstack.org/p/juno-marconi-benchmarking

Requirements

 * Run performance tests for every patch that is submitted (possibly posting results as Gerrit comments?)
 * Ability to run tests manually (push-button, given a specific fork/commit)
 * Minimal yet production-like HA deployment of Marconi
 * Test different backends
 * Set up backends with different amounts of initial messages, etc. (to exercise indexes)
 * Basic TPS measurements
 * Test end-to-end scenarios (post a message from client A, then claim and delete that message from client B)
 * Graph the following results between runs, as well as within a single run as load increases:
  * Throughput (transactions per second)
  * Latency (ms)
  * Errors (%)
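The three metrics above can be captured by a small driver loop. The sketch below is an illustration, not part of any existing tool: `run_benchmark` and its `operation` callable are hypothetical names, and in a real run `operation` would wrap a Marconi request (post, claim, or delete) rather than the no-op used here.

```python
import statistics
import time


def run_benchmark(operation, num_requests=1000):
    """Drive `operation` repeatedly and report throughput (TPS),
    latency (ms), and error rate (%) -- the metrics listed above.

    `operation` is any zero-argument callable; a failed call (any
    exception) counts as an error but still counts toward latency.
    """
    latencies_ms = []
    errors = 0
    start = time.perf_counter()
    for _ in range(num_requests):
        t0 = time.perf_counter()
        try:
            operation()
        except Exception:
            errors += 1
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return {
        'tps': num_requests / elapsed,
        'latency_ms_median': statistics.median(latencies_ms),
        'latency_ms_p99': sorted(latencies_ms)[int(0.99 * len(latencies_ms))],
        'error_pct': 100.0 * errors / num_requests,
    }


# Example: benchmark a no-op stand-in for a message post
stats = run_benchmark(lambda: None, num_requests=100)
```

Repeating this loop at increasing concurrency levels (e.g. via a thread or process pool) would produce the "increasing amounts of load" series the graphs call for.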

Nice-to-have

 * Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.)
 * Create a toolkit that can be used by the broader OpenStack community
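Correlating host KPIs with the benchmark metrics mostly requires that every sample carry a timestamp, so spikes can be lined up after the fact. A minimal, stdlib-only sketch (the function name is an assumption; a real deployment would likely use a collector such as Sensu instead):

```python
import os
import resource
import time


def sample_host_kpis():
    """Take one timestamped sample of basic host KPIs so it can later
    be correlated with benchmark metrics (e.g. a latency spike lining
    up with a load spike, or major page faults hinting at swapping).
    """
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        'timestamp': time.time(),
        'load_1min': os.getloadavg()[0],       # CPU pressure, 1-min avg
        'max_rss': usage.ru_maxrss,            # peak resident memory
                                               # (KB on Linux, bytes on macOS)
        'major_page_faults': usage.ru_majflt,  # faults that required I/O
    }


sample = sample_host_kpis()
```

DB-level KPIs (lock waits, etc.) would need backend-specific probes, e.g. `db.serverStatus()` for MongoDB, and are not covered by this sketch.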

Tools to investigate

 * Rally
 * Sensu
 * Graphite
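If Graphite is chosen, feeding it results is straightforward: its Carbon receiver accepts a plaintext protocol of `<metric path> <value> <unix timestamp>` lines, by default on TCP port 2003. A sketch, with the metric path `marconi.perf.post.tps` invented here purely as an example:

```python
import socket
import time


def graphite_line(path, value, timestamp=None):
    """Format one metric in Carbon's plaintext protocol:
    "<metric path> <value> <unix timestamp>\n".
    """
    if timestamp is None:
        timestamp = int(time.time())
    return '%s %s %d\n' % (path, value, timestamp)


def send_to_graphite(lines, host='localhost', port=2003):
    """Push formatted lines to a Carbon receiver over TCP.

    Host is an assumption for illustration; 2003 is Carbon's
    default plaintext listener port.
    """
    with socket.create_connection((host, port)) as sock:
        sock.sendall(''.join(lines).encode('utf-8'))


line = graphite_line('marconi.perf.post.tps', 1234.5, timestamp=1400000000)
```

Graphing results "between runs" then falls out of Graphite's normal time-series rendering, as long as each run pushes to a consistent metric path.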