
Difference between revisions of "Zaqar/bp/havana/perf-testing"

(First draft)

== Automated Performance Testing for Marconi ==

{| class="wikitable"
|-
! Requirement !! Description
|-
| Multiple rounds || Each benchmark should run several times (~5, depending on platform variability), and only the best (fastest) result should be recorded. Slower times are usually caused not by the app itself but by variability in other system processes and network traffic (noisy neighbors).
|-
| Run early, run often || A fast run, a slow run, and a watermarking run (throttle up the load until the target crashes). The latter two would run once a night, while the fast run would run on every patch submitted to Gerrit.
|-
| Gating || Comment on a patch with -1 if it significantly degrades performance.
|-
| Manual trigger || It should be possible to run any test manually, either from HEAD or from a specific patch in Gerrit.
|-
| Historical graphs || Performance of HEAD should be tracked over time, with markers for major releases of the project.
|}

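The "multiple rounds" requirement above (run each benchmark several times and keep only the fastest result) can be sketched roughly as follows; the round count and the toy benchmark body are illustrative assumptions, not part of any existing Marconi tooling.

```python
import time

def best_of(rounds, benchmark):
    """Run `benchmark` several times and keep only the fastest wall-clock
    time, discarding slower results caused by noisy neighbors."""
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        benchmark()
        timings.append(time.perf_counter() - start)
    return min(timings)

# Illustrative benchmark (an assumption, not a Marconi operation):
# sum the first 100k integers.
elapsed = best_of(5, lambda: sum(range(100_000)))
```

Taking the minimum rather than the mean filters out the one-sided noise added by other tenants and background processes, which can only make a run slower, never faster.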
Revision as of 16:42, 28 April 2014

Automated Performance Testing for Marconi

See also: https://etherpad.openstack.org/p/marconi-benchmark-plans

Requirements

  • Run performance tests for every patch that is submitted (possibly inject comments in Gerrit?)
  • Ability to run tests manually (push-button, given a specific fork/commit)
  • Minimal yet production-like HA deployment of Marconi
  • Test different backends
  • Set up backends with different amounts of initial messages, etc. (to test indexes)
  • Basic TPS measurements
  • Test end-to-end scenarios (post a message in client A, claim a message in client B, delete the message in client B)
  • Graph results between runs, as well as within a single run with increasing amounts of load
    • Throughput (transactions per second)
    • Latency (ms)
    • Errors (%)
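The measurement bullets above (throughput, latency, error rate, graphed against increasing load) could be collected by a small harness along these lines; the load steps and the stand-in request function are assumptions for illustration, not the Marconi client.

```python
import statistics
import time

def run_step(num_requests, request_fn):
    """Issue `num_requests` calls to `request_fn` and report throughput
    (requests/s), median latency (ms), and error rate (%)."""
    latencies, errors = [], 0
    start = time.perf_counter()
    for _ in range(num_requests):
        t0 = time.perf_counter()
        try:
            request_fn()
        except Exception:
            errors += 1
        latencies.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return {
        "tps": num_requests / elapsed,
        "latency_ms": statistics.median(latencies),
        "errors_pct": 100.0 * errors / num_requests,
    }

# Toy stand-in for a real "post message" request; each load step doubles
# the request count so the results can be graphed against increasing load.
results = [run_step(n, lambda: sum(range(1000))) for n in (100, 200, 400)]
```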

Nice-to-have

  • Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.)
  • Create a toolkit that can be used by the broader community
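A rough sketch of how the gating idea could work once baseline numbers exist: compare the patch's throughput against the recorded baseline and vote -1 on a significant regression. The 10% tolerance and the function name are hypothetical; actually posting the vote would go through Gerrit's review mechanism.

```python
def gate_vote(baseline_tps, patch_tps, tolerance_pct=10.0):
    """Return a Gerrit-style vote: -1 if the patch's throughput is more
    than `tolerance_pct` percent below the recorded baseline, else 0.
    The 10% threshold is an illustrative assumption."""
    degradation = 100.0 * (baseline_tps - patch_tps) / baseline_tps
    return -1 if degradation > tolerance_pct else 0

# A patch that drops throughput from 1000 to 850 TPS (a 15% regression)
# would be flagged, while a 5% drop would pass.
```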