
Revision as of 22:23, 16 February 2013

testr

testr is a test runner runner and is part of testrepository. testrepository has excellent documentation and a very useful manual; this wiki page condenses much of the info found there. testr works by using your test runner to list all of your tests. It partitions this list into a number of partitions matching the number of CPUs available on the current machine, then forks that number of test runners, giving each one its own partition of the test list. The test runners are expected to speak subunit back to testr so that testr can keep track of test successes and failures along with other statistics.

TL;DR testr will run your tests in parallel (so they go faster) and it keeps robust logs of the results.
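The partitioning step described above can be sketched roughly as follows. This is an illustrative Python sketch, not testr's actual scheduler, and partition_tests is a made-up helper name:

```python
import multiprocessing

def partition_tests(test_ids, num_partitions=None):
    """Split a flat test list into roughly equal round-robin chunks,
    one per CPU. Illustrative only -- not testr's real implementation."""
    if num_partitions is None:
        num_partitions = multiprocessing.cpu_count()
    partitions = [[] for _ in range(num_partitions)]
    for i, test_id in enumerate(test_ids):
        partitions[i % num_partitions].append(test_id)
    return partitions

tests = ["nova.tests.test_a", "nova.tests.test_b",
         "nova.tests.test_c", "nova.tests.test_d"]
# with two partitions, tests 0 and 2 go to one runner, 1 and 3 to the other
print(partition_tests(tests, num_partitions=2))
```

Each partition would then be handed to its own forked test runner process.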

Using testr

Nova is the only OpenStack project to currently use testr (we ported the hard project first). To use testr with Nova you can use the existing wrappers, tox and run_tests.sh.

tox -epy27
# or
tox -epy26
# or
tox -ecover
# or
run_tests.sh

Any of the above will run Nova's unit tests with testr. You can specify which tests to run with a regex (the regex matches against the full path of the test, e.g. nova.tests.test_something.TestSomething.test_this_one_unit):

tox -epy27 -- test_name_regex
# or
run_tests.sh test_name_regex

testr can be quite useful when run directly as well. First source your test venv.

source .tox/py27/bin/activate

With the venv sourced we can run testr directly.

testr help

testr run --parallel
testr run --parallel test_name_regex

After tests have run you can view the failing tests then run only that list of tests.

testr failing
testr run --failing

Test Logs

You will find that testr keeps test logs in .testrepository/$TEST_RUN_NUMBER. These logs are subunit streams and are used to determine which tests are failing when you run testr failing. There are a couple of neat consequences of this: you can pass the logs around to collaborate on a set of failing tests; it is easy to write scripts that parse and transform them (see python-subunit and its subunit parser); and you can use them to help reproduce errors by feeding the recorded test list back to the test runners.
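For robust parsing you should use python-subunit itself, but as a rough illustration of how scriptable the logs are, the line-based v1 text form of a subunit stream can be scanned with a few lines of stdlib Python. failing_tests is a hypothetical helper, and real logs may use the binary v2 protocol:

```python
def failing_tests(stream_lines):
    """Collect the ids of failed tests from a v1 text subunit stream.
    Illustrative sketch only -- use python-subunit for real parsing."""
    failures = []
    for line in stream_lines:
        line = line.strip()
        if line.startswith("failure:"):
            # "failure: test_id [" optionally opens a details block
            test_id = line.split(":", 1)[1].strip().split(" ")[0]
            failures.append(test_id)
    return failures

log = [
    "test: nova.tests.test_a",
    "success: nova.tests.test_a",
    "test: nova.tests.test_b",
    "failure: nova.tests.test_b [",
    "Traceback ...",
    "]",
]
print(failing_tests(log))  # ['nova.tests.test_b']
```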

Reproducing Failures

One way to attempt to reproduce failures is to rerun them in the same order that they were run the first time they failed. To get this list of tests we first need the worker name for the subprocess that ran the failed test. Look in .testrepository/$LAST_TEST_RUN_NUMBER, find the test that failed, then under the tags section you should see something like:

tags: worker-3

With this worker name we can extract the list of tests that ran in that test run on that worker.

source .tox/py27/bin/activate
testr last --subunit | subunit-filter -s --xfail --with-tag=worker-3 | subunit-ls > slave-3.list

Using this test list we can run that set of tests in the same order that caused the failure with:

source .tox/py27/bin/activate
testr run --load-list=slave-3.list

testr also comes with a really fancy automated test bisection feature that will try to determine the minimal set of tests required to reproduce failures that occur when tests interfere with each other. To use this feature, run

source .tox/py27/bin/activate
testr run --analyze-isolation

after you have had a failed test run.
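Conceptually, the bisection works like a binary search over the tests that ran before the failing one. The sketch below shows the idea under the assumption of a single interfering test; find_culprit and runs_clean are made-up names, not testr's implementation:

```python
def find_culprit(prior_tests, runs_clean):
    """Binary-search the tests that ran before a failing test for the one
    that triggers the failure. runs_clean(subset) re-runs subset followed
    by the failing test and returns True if the failing test then passes.
    A conceptual sketch of what --analyze-isolation automates, assuming
    exactly one interfering test."""
    candidates = list(prior_tests)
    while len(candidates) > 1:
        half = len(candidates) // 2
        first = candidates[:half]
        if runs_clean(first):
            # the interference is not in the first half
            candidates = candidates[half:]
        else:
            candidates = first
    return candidates[0]

# toy example: pretend test "b" leaves global state that breaks the failing test
culprit = find_culprit(["a", "b", "c", "d"], lambda subset: "b" not in subset)
print(culprit)  # b
```

Each call to runs_clean costs a real test run, which is why the bisection halves the candidate set each round rather than retrying tests one at a time.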