testr is a test runner and is part of testrepository. testrepository has excellent documentation and a very useful manual; this wiki page tries to condense much of the information found there. testr works by using your test runner to list all of your tests. It partitions that list into as many partitions as there are CPUs on the current machine, then forks that number of test runners, giving each one its own partition of the test list. The test runners are expected to speak subunit back to testr so that testr can keep track of test successes and failures along with other statistics.
TL;DR testr will run your tests in parallel (so they go faster) and it keeps robust logs of the results.
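The partition-and-fork scheme described above can be sketched roughly in Python. This is a simplified illustration, not testr's actual code; the function name and the round-robin strategy are assumptions (real testr can also use historical timing data to balance the partitions):

```python
import os

def partition_tests(test_ids, n_workers=None):
    """Split a flat list of test ids into one partition per worker.

    Round-robin is the simplest possible strategy; real testr is
    smarter about balancing the partitions.
    """
    if n_workers is None:
        n_workers = os.cpu_count() or 1
    partitions = [[] for _ in range(n_workers)]
    for i, test_id in enumerate(test_ids):
        partitions[i % n_workers].append(test_id)
    return partitions

tests = ["nova.tests.test_a", "nova.tests.test_b",
         "nova.tests.test_c", "nova.tests.test_d",
         "nova.tests.test_e"]
for worker, part in enumerate(partition_tests(tests, n_workers=2)):
    print("worker-%d runs %s" % (worker, part))
```

Each partition would then be handed to its own forked test runner, which streams results back over subunit.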
Nova, neutron, and keystone are three OpenStack projects that currently use testr (nova, the hardest of the three to port, went first). To see if a project uses testr, look at the commands setting in the [testenv] section of tox.ini at the root of the project.
To use testr you can use the existing wrappers, tox and run_tests.sh:

tox -epy27
tox -epy26
tox -ecover
run_tests.sh
Writing Tests for testr

There are certain nose constructs that don't work well with testr. Please elaborate here.

Running Specific Tests

The commands above will run all of Nova's unit tests with testr. You can specify which tests you want to run with a regex (the regex matches against the full path of the test, e.g. nova.tests.test_something.TestSomething.test_this_one_unit):

tox -epy27 -- test_name_regex
run_tests.sh test_name_regex
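To see how a regex selects tests, here is a small sketch of the matching. It assumes unanchored re.search semantics over the full test id, and the test ids below are made up:

```python
import re

test_ids = [
    "nova.tests.test_something.TestSomething.test_this_one_unit",
    "nova.tests.test_something.TestSomething.test_another_unit",
    "nova.tests.test_other.TestOther.test_unrelated",
]

# An unanchored search means a bare substring like "test_something"
# selects every test in that module.
pattern = re.compile("test_something")
selected = [t for t in test_ids if pattern.search(t)]
print(selected)
```

A more specific regex, such as TestSomething.test_this_one_unit, narrows the selection down to a single test.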
testr can be quite useful when run directly as well. First, source your test venv:

source .tox/py27/bin/activate

With the venv sourced, we can run testr directly:
testr help
testr run --parallel
testr run --parallel test_name_regex
After tests have run, you can view the failing tests and then re-run only those tests:

testr failing
testr run --failing
You will find that testr keeps test logs in .testrepository/$TEST_RUN_NUMBER. These logs are subunit streams and are used to determine which tests are failing when you run testr failing. This has a couple of neat consequences: you can pass the logs around to collaborate on a set of failing tests; it is easy to write scripts that parse and transform them (see python-subunit and its subunit parser); and you can use them to help reproduce errors by recreating the test lists that testr fed to the test runners.
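As an illustration of how easy the logs are to script against, here is a toy parser that pulls failing test ids out of a stream. It assumes the line-oriented subunit v1 text format and only handles the simplest outcome lines; for real work use python-subunit:

```python
import re

def failing_tests(stream_lines):
    """Extract failing test ids from a simplified subunit v1 text stream.

    Subunit v1 is line oriented: "test: <id>" marks a test start and
    "success:"/"failure:"/"error: <id>" marks an outcome.  This toy
    parser ignores attached tracebacks, tags, and the binary v2 format.
    """
    failed = []
    for line in stream_lines:
        m = re.match(r"(failure|error):\s+(\S+)", line)
        if m:
            failed.append(m.group(2))
    return failed

log = [
    "test: nova.tests.test_a",
    "success: nova.tests.test_a",
    "test: nova.tests.test_b",
    "failure: nova.tests.test_b",
]
print(failing_tests(log))
```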
One way to attempt to reproduce failures is to rerun them in the same order that they were run the first time they failed. To get this list of tests, we first need the worker name for the subprocess that ran the failed test. Look in .testrepository/$LAST_TEST_RUN_NUMBER, find the test that failed, then under the tags section you should see something like:

worker-3
With this worker name we can extract the list of tests that ran in that test run on that worker.
source .tox/py27/bin/activate
testr last --subunit | subunit-filter -s --xfail --with-tag=worker-3 | subunit-ls > slave-3.list
Using this test list we can run that set of tests in the same order that caused the failure with:
source .tox/py27/bin/activate
testr run --load-list=slave-3.list
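The effect of --load-list can be mimicked with plain unittest: build a suite in exactly the listed order rather than the loader's default order. The class, method names, and list contents below are hypothetical:

```python
import unittest

class TestSomething(unittest.TestCase):
    def test_a(self):
        pass
    def test_b(self):
        pass

# Hypothetical contents of slave-3.list, in the order the worker ran them.
load_list = ["test_b", "test_a"]

# A TestSuite preserves insertion order, so the tests run exactly as listed,
# which is what lets you replay the ordering that triggered a failure.
suite = unittest.TestSuite(TestSomething(name) for name in load_list)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, result.wasSuccessful())
```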
testr also comes with a really fancy automated test-bisection feature that will try to determine the minimal set of tests required to reproduce failures caused by tests interfering with each other. To use this feature, run
source .tox/py27/bin/activate
testr run --analyze-isolation
after you have had a failed test run.
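Conceptually, the isolation analysis is a bisection over the tests that ran before the failure. The toy sketch below shows the core idea; find_polluter, run(), and the test names are all stand-ins, and real testr handles many more cases:

```python
def find_polluter(predecessors, failing_test, run):
    """Bisect the tests that ran before failing_test to find the one
    whose side effects make it fail.  run(order) returns True on a
    passing run and False on a failing one.  Assumes exactly one
    polluting test exists among the predecessors.
    """
    candidates = list(predecessors)
    while len(candidates) > 1:
        half = len(candidates) // 2
        first = candidates[:half]
        # If running only the first half before the failing test still
        # reproduces the failure, the polluter is in that half.
        if not run(first + [failing_test]):
            candidates = first
        else:
            candidates = candidates[half:]
    return candidates[0] if candidates else None

# Toy model of interference: "t2" pollutes shared state, which makes
# "victim" fail whenever it runs afterwards.
def run(order):
    polluted = False
    for t in order:
        if t == "t2":
            polluted = True
        if t == "victim" and polluted:
            return False  # failing run
    return True  # passing run

print(find_polluter(["t1", "t2", "t3", "t4"], "victim", run))
```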
Debugging (pdb) Tests
Debugging tests requires the use of testtools.run; the bug listed here explains why direct pdb support does not work. There is a simple process for using pdb within tests. First, generate a list of tests to run:
testr list-tests test_name_regex > my-list
(Note: test_name_regex is the same regex as previously mentioned in this document.) Now pass the list to testtools.run:
python -m testtools.run discover --load-list my-list
Wherever you set pdb.set_trace(), execution will break into the debugger.