Marconi: Automated Performance Testing

See also:

  • https://etherpad.openstack.org/p/marconi-benchmark-plans
  • https://etherpad.openstack.org/p/juno-marconi-benchmarking

Requirements

  • Run performance tests for every patch that is submitted (possibly posting results as comments in Gerrit?)
  • Ability to run tests manually (push-button, given a specific fork/commit)
  • Minimal yet production-like HA deployment of Marconi
  • Test different backends
  • Set up backends with different numbers of initial messages, etc. (to exercise indexes)
  • Basic TPS measurements
  • Test end-to-end scenarios (post a message in client A, claim a message in client B, delete the message in client B); see the sketch after this list
  • Graph results between runs, as well as within a single run with increasing amounts of load
    • Throughput (transactions per second)
    • Latency (ms)
    • Errors (%)
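
The end-to-end scenario and the basic TPS/latency/error measurements above could be driven by a small script along the following lines. This is only a minimal sketch, assuming a Marconi v1 endpoint at http://localhost:8888 running with auth disabled; the queue name, headers, and iteration count are illustrative placeholders, not part of any agreed design.

  import time
  import uuid
  import requests

  BASE = "http://localhost:8888/v1"     # assumed local Marconi v1 endpoint
  QUEUE = "perf-test"                   # placeholder queue name
  HEADERS = {
      "Client-ID": str(uuid.uuid4()),   # v1 API expects a Client-ID header
      "X-Project-Id": "perf",           # project scoping with auth disabled
  }

  def run_iteration(session):
      """Post one message (client A), then claim and delete it (client B)."""
      start = time.time()
      try:
          # Client A: post a message
          r = session.post("%s/queues/%s/messages" % (BASE, QUEUE),
                           json=[{"ttl": 300, "body": {"event": "ping"}}],
                           headers=HEADERS)
          r.raise_for_status()

          # Client B: claim up to one message
          r = session.post("%s/queues/%s/claims?limit=1" % (BASE, QUEUE),
                           json={"ttl": 300, "grace": 60}, headers=HEADERS)
          r.raise_for_status()

          # Client B: delete each claimed message via its returned href
          for msg in (r.json() if r.status_code == 201 else []):
              session.delete("http://localhost:8888" + msg["href"],
                             headers=HEADERS).raise_for_status()
          return time.time() - start, True
      except requests.RequestException:
          return time.time() - start, False

  def benchmark(iterations=100):
      session = requests.Session()
      session.put("%s/queues/%s" % (BASE, QUEUE), headers=HEADERS)  # ensure queue exists

      latencies, errors = [], 0
      wall_start = time.time()
      for _ in range(iterations):
          latency, ok = run_iteration(session)
          latencies.append(latency)
          errors += 0 if ok else 1
      elapsed = time.time() - wall_start

      print("Throughput: %.1f TPS" % (iterations / elapsed))
      print("Avg latency: %.1f ms" % (1000 * sum(latencies) / len(latencies)))
      print("Errors: %.1f%%" % (100.0 * errors / iterations))

  if __name__ == "__main__":
      benchmark()

A real harness would run many such loops concurrently, and from separate hosts, to generate meaningful load; the single-threaded loop here is only meant to show the request flow and the three metrics being recorded.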

Nice-to-have

  • Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.); see the sketch after this list
  • Create a toolkit that can be used by the broader community
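
Correlating system-level KPIs with benchmark results could start with a side loop that samples host metrics while the tests run. A minimal sketch, assuming psutil is available on the hosts under test; the sampling interval, sample count, and output format are placeholders, and DB-level KPIs (locks, swapping) would still come from the backend's own tooling.

  import time
  import psutil

  def sample_kpis(interval=1.0, samples=60):
      """Print CPU, memory, and per-interval network counters."""
      last_net = psutil.net_io_counters()
      for _ in range(samples):
          time.sleep(interval)
          net = psutil.net_io_counters()
          print("cpu=%.1f%% mem=%.1f%% tx=%dB rx=%dB" % (
              psutil.cpu_percent(),
              psutil.virtual_memory().percent,
              net.bytes_sent - last_net.bytes_sent,
              net.bytes_recv - last_net.bytes_recv))
          last_net = net

  if __name__ == "__main__":
      sample_kpis()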

Tools to investigate