Latest revision as of 18:42, 7 August 2014
Marconi: Automated Performance Testing
See also
- https://etherpad.openstack.org/p/marconi-benchmark-plans
- https://etherpad.openstack.org/p/juno-marconi-benchmarking
Requirements
- Run performance tests for every patch that is submitted (possibly posting results as Gerrit comments?)
- Ability to run tests manually (push-button, given a specific fork/commit)
- Minimal yet production-like HA deployment of Marconi
- Test different backends
- Set up backends with different amounts of initial messages, etc. (to test indexes)
- Basic TPS measurements
- Test end-to-end scenarios (post a message from client A, claim it from client B, then delete it from client B)
- Graph results between runs, as well as within a single run under increasing load
- Throughput (transactions per second)
- Latency (ms)
- Errors (%)
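The three metrics above can be reduced from raw per-request samples collected during a run. A minimal sketch of that reduction, assuming each sample is a `(latency_ms, success)` tuple (the `summarize` helper and the sample format are hypothetical, not part of the plan):

```python
def summarize(samples, duration_s):
    """Reduce benchmark samples to throughput, latency, and error metrics.

    samples: list of (latency_ms, success) tuples for one run.
    duration_s: wall-clock length of the run, in seconds.
    """
    latencies = sorted(lat for lat, _ in samples)
    errors = sum(1 for _, ok in samples if not ok)
    n = len(samples)

    def pct(p):
        # Nearest-rank percentile over the sorted latencies.
        return latencies[min(n - 1, int(p / 100.0 * n))]

    return {
        "tps": n / duration_s,               # throughput (transactions/s)
        "p50_ms": pct(50),                   # median latency
        "p95_ms": pct(95),                   # tail latency
        "error_pct": 100.0 * errors / n,     # error rate (%)
    }

# Example: 4 requests over 2 seconds, one of which failed.
stats = summarize([(10, True), (12, True), (30, False), (11, True)], 2.0)
print(stats)
```

Emitting these per run (and per load step within a run) gives the time series needed for the graphing requirement.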
Nice-to-have
- Collect and correlate CPU, RAM, network, and DB KPIs (locks, swapping, etc.)
- Create a toolkit that can be used by the broader community
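For the KPI-collection nice-to-have, per-process CPU and memory counters are available from the Python standard library; a minimal sketch, assuming a Unix host (`sample_kpis` is a hypothetical helper; host-wide, network, and DB KPIs would need an external collector such as Sensu):

```python
import resource
import time

def sample_kpis():
    # Per-process CPU and memory counters via getrusage (Unix only).
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_user_s": ru.ru_utime,   # user-mode CPU time, seconds
        "cpu_sys_s": ru.ru_stime,    # kernel-mode CPU time, seconds
        "max_rss_kb": ru.ru_maxrss,  # peak resident set size (KiB on Linux)
        "timestamp": time.time(),
    }

before = sample_kpis()
# ... run one benchmark iteration here, then sample again ...
after = sample_kpis()
print("CPU delta (s):", after["cpu_user_s"] - before["cpu_user_s"])
```

Correlating such samples (by timestamp) with the throughput/latency series would let a regression in TPS be traced back to CPU saturation, swapping, or DB contention.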
Tools to investigate
- Rally
- Sensu
- Graphite