Testr

= testr =

testr is a test runner and is part of testrepository, which has excellent documentation and a very useful manual; this wiki page tries to condense much of the information found there. testr works by using your test runner to list all of your tests. It partitions this list into a number of partitions matching the number of CPUs available on the current machine, then forks that many test runners, giving each one its own partition of the test list. The test runners are expected to speak subunit back to testr so that testr can keep track of test successes and failures along with other statistics.

TL;DR: testr runs your tests in parallel (so they go faster) and keeps robust logs of the results.

Dependency tree:


 * os-testr (OpenStack-specific wrapper; contains the "ostestr" command)
 * testrepository (contains the "testr" command)
 * fixtures
 * subunit

= Using testr =

Nova, neutron, and keystone are three OpenStack projects that currently use testr (nova, the hardest of the three to port, went first). To see if a project uses testr, look at the commands setting in the [testenv] section of tox.ini at the root of the project.
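For example, a quick check from the project root:

```shell
# Check whether a project uses testr by looking for it in tox.ini
grep -n "testr" tox.ini
```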

To use testr, you can use the existing wrappers: tox and run_tests.sh.
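For instance (the py27 environment name is an assumption; substitute whatever environment the project's tox.ini defines):

```shell
# Run the test suite through tox, or through the legacy wrapper script
tox -epy27
./run_tests.sh
```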

= Writing Tests for testr =

There are certain nose constructs that don't work well with testr. ((Please elaborate here.)) Running the suite without a filter will run all of Nova's unittests with testr. You can specify which tests you want to run with a regex. Full regular expressions do not appear to be supported, but a trailing * wildcard works. The regex is matched against the full path of the test, for example nova.tests.test_something.TestSomething.test_this_one_unit or "ceilosca.ceilometer.tests.unit.storage.*".
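As a sketch, passing a regex through tox to testr (the test paths are the illustrative ones above; the py27 environment name is an assumption):

```shell
# Run a single unit test by its full path
tox -epy27 -- nova.tests.test_something.TestSomething.test_this_one_unit
# Run a group of tests with a trailing wildcard
tox -epy27 -- "ceilosca.ceilometer.tests.unit.storage.*"
```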

testr can be quite useful when run directly as well. First source your test venv.
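For example (the venv path assumes a py27 tox environment; adjust to the one your project uses):

```shell
# Activate the virtualenv that tox created for the tests
source .tox/py27/bin/activate
```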

With the venv sourced we can run testr directly.
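A minimal invocation; --parallel tells testr to fork one runner per CPU as described above:

```shell
# Run the whole suite in parallel
testr run --parallel
```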

After tests have run, you can view the failing tests and then run only those tests.
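Both of the commands below are standard testr subcommands:

```shell
# Show the tests that failed in the last run
testr failing
# Re-run only those failing tests
testr run --failing
```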

== Test Logs ==
You will find that testr keeps test logs in .testrepository/$TEST_RUN_NUMBER. These logs are subunit streams and are used to determine which tests are failing when you run testr failing. This has a couple of neat consequences: you can pass the logs around to collaborate on a set of failing tests; it is really easy to write scripts that parse and transform them (see python-subunit and its subunit parser); and you can use them to help reproduce errors by recovering the test list that testr fed to the test runners.
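As a sketch of how scriptable these logs are, the python-subunit CLI tools can read them directly (run number 0 is an example):

```shell
# List the test ids recorded in a stored subunit stream
subunit-ls < .testrepository/0
```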

== Reproducing Failures ==
One way to attempt to reproduce failures is to rerun the tests in the same order as the run in which they first failed. To get this list of tests, we first need the worker name for the subprocess that ran the failed test. Look in .testrepository/$LAST_TEST_RUN_NUMBER (you might need to check .stestr instead of .testrepository), find the test that failed, and under the tags section you should see something like:
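For example (worker-3 is an illustrative worker name):

```
tags: worker-3
```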

With this worker name we can extract the list of tests that ran in that test run on that worker.
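A sketch using the python-subunit filter tools; the worker name and output filename are examples, and it is worth checking subunit-filter --help for the exact tag options in your version:

```shell
# Pull out the tests that ran on worker-3, preserving their order
subunit-filter -s --xfail --with-tag=worker-3 \
    < .testrepository/$LAST_TEST_RUN_NUMBER | subunit-ls > worker-3.list
```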

Using this test list we can run that set of tests in the same order that caused the failure with:
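For example (worker-3.list is an example filename for the extracted list):

```shell
# Replay the tests in the recorded order
testr run --load-list=worker-3.list
```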

testr also comes with a really fancy automated test bisection feature that will try to determine the minimal set of tests required to reproduce failures that result when tests interfere with each other. To use this feature, run
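(a sketch; --analyze-isolation is testrepository's flag for this mode)

```shell
testr run --analyze-isolation
```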

after you have had a failed test run.

If the analyzer cannot determine the conflict, you may need to repeat these steps, shrinking the list of tests from the failing worker each time, until you have a reasonably small set of tests to compare and can see where a conflict might exist, e.g. common mocks on the same global variable.

= Debugging (pdb) Tests =

Debugging tests requires the use of testtools.run. The bug listed here explains why direct pdb support does not work.

There is a simple process to get pdb working within tests. First, generate a list of tests to run.
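For example (the regex and the my-list filename are illustrative):

```shell
# Write the matching test ids to a file
testr list-tests nova.tests.test_something > my-list
```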

Note: the test selection regex is the same one mentioned earlier in this document. You may also edit the my-list file to narrow down the tests to be run. Now give the list as an option to testtools.run.
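A sketch, assuming the my-list file generated above (--load-list is testtools.run's option for consuming a test list):

```shell
# Run the listed tests under testtools.run so pdb can attach
python -m testtools.run discover --load-list my-list
```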

Wherever you have placed pdb.set_trace(), execution will break into the debugger.

= FAQ =

How can I run just one test? Just some tests?
To limit the tests that are run, provide tox with a suitably narrow regex to limit test discovery:
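For example (the py27 environment name and the regex are illustrative):

```shell
tox -epy27 -- nova.tests.test_something
```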

To run one test:
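For example (the test id is illustrative):

```shell
tox -epy27 -- nova.tests.test_something.TestSomething.test_this_one_unit
```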

How can I exit a test run after the first failure?
Sometimes you want to run all the tests but exit after the first failure (and not wait until the end). In projects that use subunit.run as the test runner at the bottom of the stack, this is possible by using testr directly and skipping tox:

In some environments (for example Ceilometer) it is necessary to establish the testing environment:
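This typically means activating the tox virtualenv first (the path is an assumption):

```shell
source .tox/py27/bin/activate
```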

What do I do when I see "The test run didn't actually run any tests" in the output?
This can happen when an error is encountered during test listing, such as an import error.

To see a trace, list the tests:
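For example:

```shell
# Listing the tests surfaces import errors with a traceback
testr list-tests
```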