== Automated Performance Testing for Marconi ==
See also: https://etherpad.openstack.org/p/marconi-benchmark-plans
Requirements
* Run performance tests for every patch that is submitted (possibly injecting results as review comments in Gerrit; see the second sketch after this list)
* Ability to run tests manually (push-button, given a specific fork/commit)
* Minimal yet production-like HA deployment of Marconi
* Test different backends
* Set up backends with different amounts of initial messages, etc. (to test indexes)
* Basic TPS measurements
* Test end-to-end scenarios (post a message in client A, claim it in client B, delete it in client B); the first sketch after this list walks through this cycle together with the TPS and latency measurements
* Graph results between runs, as well as within a single run under increasing load
** Throughput (transactions per second)
** Latency (ms)
** Errors (%)
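
A first cut at the end-to-end scenario and the basic TPS/latency numbers could drive the v1 HTTP API directly, with two sessions standing in for clients A and B. The following is a minimal sketch, assuming a Marconi v1 deployment reachable at a placeholder URL; the queue name, iteration count, and TTLs are likewise placeholders, and a real harness would add error counting and concurrent workers:

<pre>
# Hypothetical single-threaded driver: client A posts, client B claims and
# deletes, and each full cycle is timed. The endpoint, queue name, and
# iteration count are placeholders.
import time
import uuid

import requests

BASE = 'http://localhost:8888'   # placeholder Marconi endpoint
QUEUE = 'perf-test'
ITERATIONS = 1000

# Marconi requires a Client-ID header; distinct UUIDs make the two
# sessions behave like independent clients A and B.
client_a = requests.Session()
client_a.headers['Client-ID'] = str(uuid.uuid4())
client_b = requests.Session()
client_b.headers['Client-ID'] = str(uuid.uuid4())

client_a.put('%s/v1/queues/%s' % (BASE, QUEUE))  # ensure the queue exists

latencies = []
start = time.time()
for i in range(ITERATIONS):
    t0 = time.time()

    # Client A posts one message.
    client_a.post('%s/v1/queues/%s/messages' % (BASE, QUEUE),
                  json=[{'ttl': 300, 'body': {'seq': i}}])

    # Client B claims whatever is available...
    resp = client_b.post('%s/v1/queues/%s/claims' % (BASE, QUEUE),
                         json={'ttl': 300, 'grace': 300})
    if resp.status_code == 201:
        # ...and deletes each claimed message; the returned href already
        # carries its claim_id as a query parameter.
        for msg in resp.json():
            client_b.delete(BASE + msg['href'])

    latencies.append(time.time() - t0)

elapsed = time.time() - start
print('TPS: %.1f' % (ITERATIONS / elapsed))
print('avg latency: %.1f ms'
      % (1000 * sum(latencies) / len(latencies)))
</pre>

Note that a "transaction" here is one full post/claim/delete cycle; counting individual API calls instead would roughly triple the reported TPS, so whichever definition is chosen should be kept consistent across runs.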
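The Gerrit integration could be as simple as posting the headline numbers back on the change once a run completes, via Gerrit's set-review REST endpoint. A minimal sketch, assuming a bot account with an HTTP password; the host, change number, credentials, and result variables are all placeholders:

<pre>
# Hypothetical post-run hook: comment the headline numbers on the change
# that triggered the run. Host, credentials, change number, and the
# result variables are placeholders to be filled in from a real run.
import requests

GERRIT = 'https://review.openstack.org'
CHANGE = '12345'                          # placeholder change number
BOT_AUTH = ('perf-bot', 'http-password')  # placeholder credentials

tps, latency_ms, error_pct = 0.0, 0.0, 0.0  # taken from the run's results
review = {'message': 'Marconi perf run: %.1f TPS, %.0f ms avg latency, '
                     '%.2f%% errors' % (tps, latency_ms, error_pct)}

resp = requests.post('%s/a/changes/%s/revisions/current/review'
                     % (GERRIT, CHANGE),
                     json=review, auth=BOT_AUTH)
resp.raise_for_status()
</pre>

Whether such comments should land on every patch set or only when a regression is detected is still an open question, per the first requirement above.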
Nice-to-have
* Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.); a sampling sketch follows this list
* Create a toolkit that can be used by the broader community
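
For the KPI collection, a lightweight sampler could run on each node alongside the load and be joined with the benchmark results by timestamp. Below is a minimal sketch using psutil; the sample interval, duration, and output path are placeholders, and DB-level counters (locks, etc.) would come from backend-specific tools instead:

<pre>
# Hypothetical host-level KPI sampler, run alongside the load test and
# later joined with the results by timestamp. Interval, duration, and
# output path are placeholders.
import csv
import time

import psutil

DURATION = 600   # seconds; placeholder, match the length of the load run

with open('kpis.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['timestamp', 'cpu_percent', 'ram_percent',
                     'net_bytes_sent', 'net_bytes_recv'])
    deadline = time.time() + DURATION
    while time.time() < deadline:
        net = psutil.net_io_counters()
        writer.writerow([time.time(),
                         psutil.cpu_percent(interval=None),
                         psutil.virtual_memory().percent,
                         net.bytes_sent, net.bytes_recv])
        f.flush()
        time.sleep(1)   # one sample per second
</pre>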