Marconi: Automated Performance Testing

See also: https://etherpad.openstack.org/p/marconi-benchmark-plans

Requirements

  • Run performance tests for every patch that is submitted (possibly inject comments in Gerrit?)
  • Ability to run tests manually (push-button, given a specific fork/commit)
  • Minimal yet production-like, HA deployment of Marconi
  • Test different backends
  • Set up backends with different numbers of initial messages, etc. (to test indexes)
  • Basic TPS measurements
  • Test end-to-end scenarios (post a message from client A, claim the message in client B, delete the message in client B); a sketch of this flow follows the list below
  • Graph results between runs, as well as within a single run with increasing amounts of load
    • Throughput (transactions per second)
    • Latency (ms)
    • Errors (%)
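
The end-to-end scenario above could be exercised with something as small as the sketch below, which drives the Marconi v1 HTTP API directly using the requests library. The endpoint URL, queue name, TTLs, and run count are illustrative assumptions, not part of this blueprint.

  # Minimal sketch: post a message as "client A", then claim and delete it
  # as "client B", timing the whole round trip. The endpoint, queue name,
  # TTLs, and run count are assumed values.
  import time
  import uuid

  import requests

  BASE = "http://localhost:8888"              # assumed Marconi endpoint
  QUEUE = "perf-test"                          # assumed queue name
  HEADERS = {"Client-ID": str(uuid.uuid4())}   # v1 API expects a Client-ID


  def run_once(session):
      start = time.time()

      # Client A: post a single message with a short TTL.
      session.post("%s/v1/queues/%s/messages" % (BASE, QUEUE),
                   json=[{"ttl": 300, "body": {"event": "perf"}}],
                   headers=HEADERS).raise_for_status()

      # Client B: claim up to one message, then delete it via the returned href.
      claim = session.post("%s/v1/queues/%s/claims?limit=1" % (BASE, QUEUE),
                           json={"ttl": 300, "grace": 60},
                           headers=HEADERS)
      claim.raise_for_status()
      if claim.status_code == 201:             # 204 means nothing was claimable
          for msg in claim.json():
              session.delete(BASE + msg["href"],
                             headers=HEADERS).raise_for_status()

      return time.time() - start               # end-to-end latency in seconds


  if __name__ == "__main__":
      with requests.Session() as s:
          s.put("%s/v1/queues/%s" % (BASE, QUEUE), headers=HEADERS)
          latencies = [run_once(s) for _ in range(100)]
          print("avg round trip: %.1f ms"
                % (1000 * sum(latencies) / len(latencies)))

Throughput, latency percentiles, and error percentages would come from running many such clients concurrently and aggregating the timings, which is what the tools listed below are meant to automate.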

Nice-to-have

  • Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.); see the sampling sketch below
  • Create a toolkit that can be used by the broader community
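
For the host-level KPIs above, something like the following psutil loop could run on each node while a test is in flight; the sampling interval and metric names are illustrative assumptions.

  # Minimal sketch of sampling the CPU, RAM, swap, and network KPIs
  # mentioned above with psutil; the interval and names are assumed values.
  import time

  import psutil


  def sample():
      net = psutil.net_io_counters()
      return {
          "cpu.percent": psutil.cpu_percent(interval=None),
          "mem.percent": psutil.virtual_memory().percent,
          "swap.percent": psutil.swap_memory().percent,
          "net.bytes_sent": net.bytes_sent,
          "net.bytes_recv": net.bytes_recv,
      }


  if __name__ == "__main__":
      while True:
          print(int(time.time()), sample())
          time.sleep(5)  # assumed 5-second sampling interval

Correlating these samples with the per-run throughput/latency graphs is where tools like Sensu and Graphite (below) could help; DB-specific KPIs (locks, etc.) would still need a backend-specific collector.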

Tools to investigate

  • Rally
  • Sensu
  • Graphite
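
If Graphite is chosen, both the per-run results and the KPI samples above could be pushed to its carbon listener with the plaintext protocol ("<path> <value> <timestamp>"), so runs can be graphed and compared over time. The host, port, metric prefix, and example numbers below are illustrative assumptions.

  # Minimal sketch of pushing one run's results to carbon/Graphite over the
  # plaintext protocol; host, port, prefix, and values are assumed.
  import socket
  import time


  def push_to_graphite(metrics, host="graphite.example.org", port=2003,
                       prefix="marconi.perf"):
      now = int(time.time())
      lines = ["%s.%s %s %d" % (prefix, name, value, now)
               for name, value in metrics.items()]
      sock = socket.create_connection((host, port), timeout=5)
      try:
          sock.sendall(("\n".join(lines) + "\n").encode("ascii"))
      finally:
          sock.close()


  if __name__ == "__main__":
      # Hypothetical numbers from one benchmark run.
      push_to_graphite({"throughput_tps": 850,
                        "latency_ms_avg": 42.7,
                        "errors_percent": 0.3})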