= Marconi: Automated Performance Testing =
  
See also
* https://etherpad.openstack.org/p/marconi-benchmark-plans
* https://etherpad.openstack.org/p/juno-marconi-benchmarking
 
==Requirements==
* Run performance tests for every patch that is submitted (possibly inject comments in Gerrit?)
* Ability to run tests manually (push-button, given a specific fork/commit)
* Minimal yet production-like, HA deployment of Marconi
* Test different backends
* Set up backends with different amounts of initial messages, etc. (to test indexes)
* Basic TPS measurements
* Test end-to-end scenarios (post a message from client A, claim the message from client B, then delete it from client B); see the sketch after this list
* Graph results between runs, as well as within a single run under increasing load:
** Throughput (transactions per second)
** Latency (ms)
** Errors (%)
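
One possible shape for the end-to-end scenario script referenced above, which also yields the basic TPS, latency, and error figures. This is only a sketch: it assumes a Marconi v1 HTTP endpoint at http://localhost:8888 with auth disabled and a hypothetical queue named perf-test, and the exact URLs and payload formats should be checked against the deployment under test.

<pre>
# Times one post -> claim -> delete round trip and prints rough
# TPS/latency/error figures. Endpoint, queue name, and payload
# shapes are assumptions, not a reference implementation.
import time
import uuid

import requests

ROOT = "http://localhost:8888"   # assumed Marconi endpoint (no auth)
BASE = ROOT + "/v1"
QUEUE = "perf-test"              # hypothetical queue name


def run_scenario():
    """Post a message as client A, then claim and delete it as client B.

    Returns the end-to-end latency in milliseconds, or None on any error.
    """
    client_a = {"Client-ID": str(uuid.uuid4())}
    client_b = {"Client-ID": str(uuid.uuid4())}

    start = time.time()

    # Client A: make sure the queue exists, then post one message.
    requests.put("%s/queues/%s" % (BASE, QUEUE), headers=client_a)
    resp = requests.post("%s/queues/%s/messages" % (BASE, QUEUE),
                         json=[{"ttl": 300, "body": {"event": "perf"}}],
                         headers=client_a)
    if resp.status_code not in (200, 201):
        return None

    # Client B: claim one message, then delete it via the returned href.
    resp = requests.post("%s/queues/%s/claims?limit=1" % (BASE, QUEUE),
                         json={"ttl": 300, "grace": 60},
                         headers=client_b)
    if resp.status_code != 201 or not resp.json():
        return None
    requests.delete(ROOT + resp.json()[0]["href"], headers=client_b)

    return (time.time() - start) * 1000.0


if __name__ == "__main__":
    runs = 50
    latencies = [ms for ms in (run_scenario() for _ in range(runs)) if ms]
    total_s = sum(latencies) / 1000.0 or 1.0
    print("tps=%.1f avg_latency_ms=%.1f errors_pct=%.1f"
          % (len(latencies) / total_s,
             sum(latencies) / max(len(latencies), 1),
             100.0 * (runs - len(latencies)) / runs))
</pre>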
 
==Nice-to-have==
* Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.); a sampling sketch follows this list
* Create a toolkit that can be used by the broader community
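
For the host-level KPIs, a small sampler run alongside the benchmark would be enough to start with. The sketch below assumes the psutil library is available on each node and writes a CSV that can later be correlated with the benchmark timeline; database-internal KPIs (locks, slow queries, etc.) would still need the backend's own instrumentation.

<pre>
# Samples CPU/RAM/swap/network on the local node at a fixed interval.
import csv
import time

import psutil


def sample_kpis(outfile="kpis.csv", interval=1.0, duration=60.0):
    """Append one row of CPU/RAM/swap/network readings per interval."""
    end = time.time() + duration
    with open(outfile, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "ram_pct", "swap_pct",
                         "net_bytes_sent", "net_bytes_recv"])
        while time.time() < end:
            net = psutil.net_io_counters()
            writer.writerow([int(time.time()),
                             psutil.cpu_percent(interval=None),
                             psutil.virtual_memory().percent,
                             psutil.swap_memory().percent,
                             net.bytes_sent,
                             net.bytes_recv])
            time.sleep(interval)


if __name__ == "__main__":
    sample_kpis()
</pre>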
 
==Tools to investigate==
* [[Rally]]
* Sensu
* Graphite (see the sketch below for one way to feed results into it)
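
If Graphite is chosen, historical graphs of HEAD performance mostly come for free: each run just pushes its summary numbers to carbon. A minimal sketch, assuming a carbon-cache endpoint at graphite.example.org:2003 (a placeholder) and a metric prefix such as marconi.perf.head (also an assumption):

<pre>
# Reports benchmark results to Graphite over carbon's plaintext protocol
# ("metric.path value timestamp\n" sent over TCP, port 2003 by default).
import socket
import time


def push_to_graphite(metrics, host="graphite.example.org", port=2003,
                     prefix="marconi.perf.head"):
    """Send a dict of metric name -> value, e.g. {"tps": 412.0}."""
    now = int(time.time())
    lines = ["%s.%s %s %d" % (prefix, name, value, now)
             for name, value in metrics.items()]
    payload = "\n".join(lines) + "\n"
    sock = socket.create_connection((host, port), timeout=5)
    try:
        sock.sendall(payload.encode("utf-8"))
    finally:
        sock.close()


if __name__ == "__main__":
    # Example: record throughput, average latency, and error rate for a run.
    push_to_graphite({"tps": 412.0, "latency_ms": 23.4, "errors_pct": 0.2})
</pre>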
