In an effort to create clarity around expectations, I recently put together a document outlining my expectations for TDD. I wanted feedback and buy-in from the team, as I was concerned that we were not always on the same page about what TDD best practice was. Having a guideline document should help us manage expectations and give us a focus for debate. For those who may be interested, I thought I would share those guidelines with you:
A test is not a unit test if:
- It talks to the database
- It communicates across the network
- It touches the file system
- It can’t run correctly at the same time as any of your other unit tests
- You have to do special things to your environment (such as editing config files) to run it.
An integration test is one that violates any of these rules. It is still useful, but it should live in a separate assembly and only needs to be run before check-in and at integration.
Integration tests are also needed where we have test doubles (fancy term for fakes, stubs, or mocks) to ensure that the code really works together and for end-to-end acceptance tests.
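To make the distinction concrete, here is a minimal Python sketch (the class and method names are hypothetical, not from our codebase): the unit test replaces the database-backed dependency with a stub, so it runs in memory with no database, network, or file-system access.

```python
class StubOrderRepository:
    """Test double: returns canned data instead of querying the database."""
    def __init__(self, totals):
        self._totals = totals

    def find_total(self, order_id):
        return self._totals[order_id]


class InvoiceService:
    """Code under test; depends only on the repository abstraction."""
    def __init__(self, repository):
        self._repository = repository

    def amount_due(self, order_id, tax_rate):
        return self._repository.find_total(order_id) * (1 + tax_rate)


def test_amount_due_applies_tax():
    # Unit test: no database, no network, no special environment setup.
    service = InvoiceService(StubOrderRepository({42: 100.0}))
    assert service.amount_due(42, 0.5) == 150.0
```

An integration test for the same service would construct the real database-backed repository instead of the stub, which is exactly why it belongs in a separate assembly.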
A unit test tests one operation on one class.
1: We write the test first, then the implementation
2: We follow the approach: red, green, refactor
2a: Red – write a failing test
2b: Green – make the test pass by the simplest method possible
2c: Refactor if necessary. The key smell that drives refactoring is duplication. This is the point at which we may choose to implement a pattern, not before.
2d: Our tests should be short: about 20 minutes at most to implement. If one takes longer, we may be trying to do too much, and we might need to break up the operation under test so that it is implemented by multiple classes. Working on a single test for two or three hours is a smell that your design is not granular enough. End up with facades to simplify; don't start writing your tests against one. Work bottom-up, not top-down, when unit testing.
2e: We always check code coverage before we check in, to look for untested code. Untested code may need a supporting test, or it might be speculative code. Remove speculative code: don't try to provide the implementation before you have the tests.
2f: Do not rely on tests in other units for your code coverage. Don’t be afraid to run the same code path twice from different tests.
2g: The test should test one operation on the class under test.
2h: Follow the usual patterns around testing at boundaries when testing an operation. You would expect more than one test per operation.
2i: Triangulate: write incremental tests that build up functionality instead of one test that tries to define everything an operation needs to do.
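The red-green-refactor cycle with triangulation can be sketched as follows (an illustrative Python example, not code from our system). The first test was made green with the simplest possible implementation (`return word`); the second test then forced the generalization:

```python
def pluralize(word, count):
    """Grown by triangulation: started as `return word` to pass the
    first test, generalized only when the second test demanded it."""
    return word if count == 1 else word + "s"


def test_one_item_is_singular():
    # First cycle: red, then green via the naive `return word`.
    assert pluralize("file", 1) == "file"


def test_many_items_are_plural():
    # Second cycle: this test triangulates the real behaviour.
    assert pluralize("file", 3) == "files"
```

Each test is small enough to implement in minutes, and refactoring happens only once duplication appears between the green implementations.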
3: We would like to have one test fixture for each class within our system
3a: Where a class has no operations to test we want to put in an empty class fixture to document why that class has no tests.
3b: A fixture would tend to have more tests than there are operations on the original class.
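A fixture ends up with more tests than operations because each operation is exercised at its boundaries. A minimal sketch (the `Stack` class here is hypothetical, chosen only for illustration):

```python
import unittest


class Stack:
    """Illustrative class under test with two operations."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()


class StackTests(unittest.TestCase):
    """One fixture for the class; more tests than operations,
    because pop is tested at both its normal and boundary cases."""

    def test_pop_returns_last_pushed_item(self):
        stack = Stack()
        stack.push(1)
        stack.push(2)
        self.assertEqual(stack.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)
```

If `Stack` had no operations worth testing, the fixture would still exist, containing only a comment documenting why (per 3a).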
4: Prefer state-based testing to interaction-based testing, and stubs or fakes to expectations
4a: Where dependencies are low-complexity classes and are not part of a chain, instantiate them and pass them in directly. At the naïve level this is 'do not stub a string or an int, but do stub out the database'. Between those extremes the boundaries are vaguer, but look to the complexity of the test as a guide.
4b: Stub complex classes or when working at layer or assembly boundaries.
4c: Consider a fake (a do-little implementation of a class, or an override of a base class) instead of a stub.
4d: Interaction-based testing can be a big source of fragile tests (tests that break when we make changes), which may cause frustration with tests rather than acceptance of them.
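The preference for state over interaction can be shown with a hand-rolled fake (illustrative Python; the service and mailer names are made up). The test asserts on what the fake recorded, not on expectations about which methods were called and how often:

```python
class FakeMailer:
    """Fake: a do-little in-memory implementation, not a mock
    pre-programmed with call expectations."""
    def __init__(self):
        self.sent = []

    def send(self, recipient, body):
        self.sent.append((recipient, body))


class PasswordResetService:
    """Code under test; depends on the mailer abstraction."""
    def __init__(self, mailer):
        self._mailer = mailer

    def request_reset(self, email):
        self._mailer.send(email, "Click here to reset your password")


def test_reset_request_sends_one_mail():
    mailer = FakeMailer()
    PasswordResetService(mailer).request_reset("a@example.com")
    # State-based assertions: inspect the observable result.
    assert len(mailer.sent) == 1
    assert mailer.sent[0][0] == "a@example.com"
```

Because the test knows nothing about the order or count of internal calls, refactoring the service's internals will not break it, which is exactly the fragility point 4d warns about.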
To understand the basics the following books are the best sources:
Kent Beck, Test Driven Development (ignore the Amazon reviews; the original is one of the best books on the topic)
Martin Fowler, Refactoring
Kerievsky, Refactoring to Patterns
TDD is an evolving practice. Good sources for current thinking are: