
Revision as of 23:04, 24 September 2013 by Kgriffs (talk | contribs) (First draft)

Automated Performance Testing for Marconi

Requirements

Multiple rounds: Each benchmark should run several times (~5, depending on platform variability), and only the best (fastest) result should be recorded. Slower times are usually caused not by the app itself, but by variability in other system processes and network traffic ("noisy neighbor" effects).

Run early, run often: Three kinds of runs are needed: a fast run, a slow run, and a watermarking run (throttle up the load until the target crashes). The latter two would run once a night, while the fast run would run on every patch submitted to Gerrit.

Gating: Comment on a patch with -1 if it significantly degrades performance.

Manual trigger: All tests should be able to be run manually, either from HEAD or against a specific patch in Gerrit.

Historical graphs: Performance of HEAD should be tracked over time, with markers for major releases of the project.
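The "multiple rounds" and "gating" requirements above can be sketched together in a few lines of Python. This is a hypothetical illustration, not Marconi's actual harness: `post_message` is a stand-in for a real benchmark target, and the baseline/threshold values would in practice come from stored historical results.

```python
import timeit

def post_message():
    # Hypothetical benchmark target; a real harness would exercise a
    # Marconi API operation over the wire instead.
    payload = {"body": "x" * 64}
    return str(payload)

NUM_ROUNDS = 5  # ~5 rounds, per the requirement above

# timeit.repeat executes the benchmark NUM_ROUNDS times and returns one
# timing per round; min() keeps only the best (fastest) result, discarding
# rounds slowed down by noisy-neighbor activity.
timings = timeit.repeat(post_message, number=1000, repeat=NUM_ROUNDS)
best = min(timings)

# Gating sketch: a patch is flagged (-1) when its best time is
# significantly slower than the recorded baseline for HEAD.
THRESHOLD = 1.10  # assumed: >10% slower counts as "significant"

def is_regression(result, baseline, threshold=THRESHOLD):
    return result > baseline * threshold
```

A gating job would load the baseline from the historical results that feed the graphs above, call `is_regression(best, baseline)`, and post the -1 review comment when it returns True.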