TestGuide

WARNING: This document is still in early draft stages!

Why Write Automated Tests?

Often when writing tests, our main goal is to verify that the piece of code we just wrote works the way we want it to. However, tests serve several important purposes beyond that. First, tests can help us design our code, especially when they are written first, as in test-driven development (TDD). Second, tests serve as useful documentation of how the code is meant to behave.

Crucially, the tests we write will be run many, many times by other people. Indeed, their primary value is that good test coverage enables other developers to make changes quickly, with confidence that they are not breaking existing behavior.

Assumptions

  • familiarity with Python's unittest module (https://docs.python.org/3/library/unittest.html)

Types of Tests

There are many different types of tests. You will very often hear developers use terminology like unit tests, integration tests, system tests, black box tests, functional tests, characterization tests, and more. Often, the same term means something different when used by different people. To avoid semantic arguments, this guide only distinguishes three types of tests: small, medium and large tests. The hope is that these terms are general enough to mean something similar to everyone.

Small Tests

Most developers in their day-to-day work will be reading, writing, and running small tests. Additionally, medium and large tests are in part defined by what small tests cannot be. Therefore, small tests are the main focus of this document.

What makes a test small? Small tests

  • are extremely isolated
  • run extremely fast

Let's break down each of these attributes in turn.

Isolation

Having isolated tests means it is easy to find the code that is causing a bug. Consider the following bad example:

EXAMPLE 1: Unisolated test
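The following is a minimal sketch built around a hypothetical Formatter class (it is not taken from nova); a single test exercises parsing, summing, and rendering all at once.

  import unittest

  # Hypothetical class used only for illustration.
  class Formatter(object):
      def parse(self, text):
          return [int(x) for x in text.split(',')]

      def total(self, values):
          return sum(values)

      def render(self, total):
          return "total=%d" % total


  class TestFormatter(unittest.TestCase):
      def test_formatter(self):
          # Parsing, summing, and rendering are all exercised in one test,
          # so a failure does not point at any one method.
          f = Formatter()
          values = f.parse("1,2,3")
          total = f.total(values)
          self.assertEqual(f.render(total), "total=6")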

If this test fails, where is the problem? If instead, this test were rewritten as the following set of small tests, we would know precisely which methods were buggy when a test failed.

EXAMPLE 2: Isolated form of Example 1, avoiding the need for fakes
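Continuing the hypothetical Formatter from Example 1, each method now gets its own small test. No fakes are needed because each method is exercised directly with plain values.

  import unittest

  # Formatter is the hypothetical class defined in EXAMPLE 1.

  class TestParse(unittest.TestCase):
      def test_parse_returns_ints(self):
          self.assertEqual(Formatter().parse("1,2,3"), [1, 2, 3])


  class TestTotal(unittest.TestCase):
      def test_total_sums_values(self):
          self.assertEqual(Formatter().total([1, 2, 3]), 6)


  class TestRender(unittest.TestCase):
      def test_render_formats_total(self):
          self.assertEqual(Formatter().render(6), "total=6")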

Another way to help isolate a test is to focus the assertions in that test around a single concept. If the test in the following example failed, we would have to look at the exact line number of the failure to know what went wrong.

EXAMPLE 3: Too many asserts
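A sketch of the problem, built around a hypothetical build_server_dict helper rather than real nova code; assertions about naming, status, and addressing are all packed into one test method.

  import unittest

  VALID_STATUSES = ("ACTIVE", "SHUTOFF", "ERROR")

  # Hypothetical helper used only for illustration.
  def build_server_dict(name, status, ip):
      return {'name': name.lower(), 'status': status, 'ip': ip}


  class TestBuildServerDict(unittest.TestCase):
      def test_build_server_dict(self):
          server = build_server_dict("VM1", "ACTIVE", "10.0.0.5")
          # Naming, status, and addressing checks are all crammed into
          # one test; a failure only tells us a line number.
          self.assertEqual(server['name'], "vm1")
          self.assertTrue(server['name'].islower())
          self.assertEqual(server['status'], "ACTIVE")
          self.assertIn(server['status'], VALID_STATUSES)
          self.assertEqual(server['ip'], "10.0.0.5")
          self.assertTrue(server['ip'].startswith("10."))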

If instead the test were broken down as follows, merely reading the name of the method that failed would give us a good idea of what went wrong.

EXAMPLE 4: Rewrite of example 3 with one concept per test
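The same assertions, regrouped so that each test method covers a single concept; the method names alone now say what broke. This reuses the hypothetical build_server_dict helper and VALID_STATUSES constant from Example 3.

  import unittest

  class TestServerName(unittest.TestCase):
      def test_name_is_lowercased(self):
          server = build_server_dict("VM1", "ACTIVE", "10.0.0.5")
          self.assertEqual(server['name'], "vm1")


  class TestServerStatus(unittest.TestCase):
      def test_status_is_preserved(self):
          server = build_server_dict("VM1", "ACTIVE", "10.0.0.5")
          self.assertEqual(server['status'], "ACTIVE")

      def test_status_is_valid(self):
          server = build_server_dict("VM1", "ACTIVE", "10.0.0.5")
          self.assertIn(server['status'], VALID_STATUSES)


  class TestServerAddress(unittest.TestCase):
      def test_ip_is_preserved(self):
          server = build_server_dict("VM1", "ACTIVE", "10.0.0.5")
          self.assertEqual(server['ip'], "10.0.0.5")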

Fake Objects

Frequently it is impossible to test an object in isolation, because it depends on the behavior of another object. Consider the following code.

EXAMPLE 5: Code of an object aggregating another object
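A minimal sketch with hypothetical classes (again, not from nova): InstanceReporter is the aggregator, and it builds its own MetadataClient, the aggregated dependency that we do not want to touch in a small test.

  # Hypothetical classes used only for illustration.
  class MetadataClient(object):
      """Talks to an external service; slow and unavailable in a small test."""
      def get_flavor(self, instance_id):
          # Imagine a network call here.
          raise NotImplementedError("requires a running metadata service")


  class InstanceReporter(object):
      """Aggregates a MetadataClient to build human-readable reports."""
      def __init__(self):
          self.client = MetadataClient()

      def describe(self, instance_id):
          flavor = self.client.get_flavor(instance_id)
          return "instance %s uses flavor %s" % (instance_id, flavor)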

How can we test the <aggregator> without also testing the <aggregated>? In situations like this, it is frequently helpful to use fake objects. A fake object is an object that has the same interface as the dependency we are trying to avoid instantiating. There are two general ways to provide fake objects as dependencies: stubbing and dependency injection. Stubbing in a fake object takes advantage of Python's dynamic nature by overwriting, in its namespace, the name that the object we want to test will use to instantiate the dependency.

EXAMPLE 6: Test isolation through stubbing
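A sketch of stubbing by hand with plain unittest, assuming the classes from Example 5 live in a hypothetical module named reporter; the test overwrites reporter.MetadataClient and restores it afterwards.

  import unittest

  import reporter  # hypothetical module containing the EXAMPLE 5 classes


  class FakeMetadataClient(object):
      """Same interface as MetadataClient, but returns canned data."""
      def get_flavor(self, instance_id):
          return "m1.small"


  class TestInstanceReporter(unittest.TestCase):
      def setUp(self):
          # Stub: overwrite the name the code under test will look up
          # when it instantiates its dependency.
          self.real_client = reporter.MetadataClient
          reporter.MetadataClient = FakeMetadataClient

      def tearDown(self):
          # Always restore the real class so other tests are unaffected.
          reporter.MetadataClient = self.real_client

      def test_describe_uses_flavor(self):
          r = reporter.InstanceReporter()
          self.assertEqual(r.describe("abc"),
                           "instance abc uses flavor m1.small")

In real test suites, the standard library's unittest.mock.patch can perform the overwrite-and-restore steps shown in setUp and tearDown automatically.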

As you can see, the <object we are testing> thinks it is getting the usual <dependency object>, but instead it is getting the fake object we created.

Dependency injection is very similar, but takes a more traditional route: the fake object is passed in explicitly, for example through the constructor.

EXAMPLE 7: Test isolation through DI
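A sketch of the dependency-injection variant, again using hypothetical classes; InstanceReporter is changed to accept its client through the constructor, so the test simply passes in the fake.

  import unittest


  class FakeMetadataClient(object):
      def get_flavor(self, instance_id):
          return "m1.small"


  class InstanceReporter(object):
      """Variant of EXAMPLE 5 that accepts its dependency explicitly."""
      def __init__(self, client):
          self.client = client

      def describe(self, instance_id):
          flavor = self.client.get_flavor(instance_id)
          return "instance %s uses flavor %s" % (instance_id, flavor)


  class TestInstanceReporter(unittest.TestCase):
      def test_describe_uses_flavor(self):
          # Inject the fake through the constructor; no namespace tricks needed.
          r = InstanceReporter(FakeMetadataClient())
          self.assertEqual(r.describe("abc"),
                           "instance abc uses flavor m1.small")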

Style Guidance

References