Roy Osherove has an article discussing ways to roll back data used in unit testing. It is an interesting piece, and Roy has done some good work in this area, but I worry that he is straying from the point:
Unit tests should not talk to external resources, such as databases. Michael Feathers has a fairly good summary of the rules for tests (linked here). Rule One is that a test is not a unit test if it talks to the database.
As Michael Feathers points out: "Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes."
Kent Beck talks about this under Mock Objects in Test-Driven Development, where he states that the solution, most of the time, is not to use a real database. Using a mock object is the solution given there, but beware the naive approach of just reaching for NMock2. The difficulty is that you can end up with Fragile Tests, because you are testing the interaction with the external component; see Martin Fowler for more on this. You may instead be better off with a Fake Object, whose state you can query afterwards. In a couple of cases we have certainly begun to suspect that we would have been better off not using NMock to fake our databases, as those tests may prove fragile (though there is no real pain yet). William Caputo has some good articles on testing across boundaries that look in detail at the ideas behind a fake database.
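The distinction between a mock and a fake holds in any language. As a rough sketch (in Python rather than .NET, with hypothetical names like FakeCustomerStore and register_customer), a fake object has working, if simplified, behaviour, so the test asserts on its state afterwards instead of scripting the exact calls it expects up front:

```python
class FakeCustomerStore:
    """In-memory stand-in for a database-backed store.
    Unlike a mock, it has real (if simplified) behaviour, so a
    test can query its state afterwards rather than scripting
    the exact sequence of calls it expects."""

    def __init__(self):
        self._rows = {}

    def save(self, customer_id, name):
        self._rows[customer_id] = name

    def find(self, customer_id):
        return self._rows.get(customer_id)


def register_customer(store, customer_id, name):
    # Code under test: it depends only on the store's interface,
    # never on a concrete database.
    store.save(customer_id, name.strip().title())


# State-based verification: act, then ask the fake what happened.
store = FakeCustomerStore()
register_customer(store, 42, "  ada lovelace ")
assert store.find(42) == "Ada Lovelace"
```

Because the assertion is on the resulting state rather than on the interaction, refactoring how register_customer talks to the store does not break the test.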
Among other things, the problem with database tests is that they do not push you toward de-coupling your data access layer in the way that the inversion of control of a Test Double approach does. So you miss (im)proving the design by testing when you adopt the rollback approach. My personal experience has been that the data access layer has always benefited from the insights we gained this way. It’s not about purism; it’s about the purpose of TDD. Though I confess that it is better that someone is testing and talking to the DB than not testing at all, it is important that people understand what the alternative is.
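To illustrate the design pressure being described, here is a minimal sketch (Python, with hypothetical names OrderGateway, total_spend, and StubOrderGateway): the domain logic is written against an abstraction, which is exactly the seam a Test Double needs and the de-coupling a rollback-style database test never forces on you:

```python
from typing import Protocol


class OrderGateway(Protocol):
    """The seam the domain code depends on. A real implementation
    would talk to the database; a Test Double need not."""
    def orders_for(self, customer_id: int) -> list: ...


def total_spend(gateway: OrderGateway, customer_id: int) -> float:
    # Domain logic written against the abstraction, so it is
    # testable without a live database.
    return sum(gateway.orders_for(customer_id))


class StubOrderGateway:
    """Test Double: canned data, no connection string in sight."""

    def __init__(self, orders):
        self._orders = orders

    def orders_for(self, customer_id):
        return self._orders


assert total_spend(StubOrderGateway([10.0, 5.5]), customer_id=1) == 15.5
```

The point is not the stub itself but that writing the test this way forced OrderGateway into existence, which is the design insight the rollback approach skips.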
We tend to use two test assemblies in a project. One is the unit test dll; it represents the tests that need to run fast all the time. The other is the integration test dll, where we put all the tests that break Michael’s rules – the ones that talk to external resources. Note that we often have an overlap: a unit test proves our design, and an integration test shows that it all works when hooked together. That is no bad thing; both are points of failure, and it is good to automate them. Many of the techniques that Roy describes are valuable when dealing with these integration tests (or even acceptance tests driven by tools like FIT), but they should be kept separate from your unit tests.
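In .NET the split falls out of having two assemblies; in other ecosystems the same separation can be sketched differently. One way (shown here in Python with a hypothetical RUN_INTEGRATION environment variable, purely as an illustration of the idea) is to gate the slow suite so the fast one runs on every change:

```python
import os
import unittest

# Hypothetical switch: integration tests run only when explicitly
# enabled, so the fast suite stays fast on every change.
RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"


class FastUnitTest(unittest.TestCase):
    """Belongs with the 'unit test dll': no external resources."""

    def test_pure_logic(self):
        self.assertEqual(sum([1, 2, 3]), 6)


@unittest.skipUnless(RUN_INTEGRATION, "talks to external resources")
class DatabaseIntegrationTest(unittest.TestCase):
    """Belongs with the 'integration test dll': breaks the rules."""

    def test_round_trip(self):
        # A real database connection would be opened here.
        pass
```

The mechanism matters less than the discipline: whichever way you slice it, the fast suite must be runnable on its own.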
BTW, once you are on .NET 2.0 you can use System.Transactions to provide the nested transactions you need to roll back your unit tests, and there is no longer any need to use Enterprise Services to do the work. This has the advantage that it may not require an OLE transaction.
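For readers outside .NET, the general mechanic behind rollback-based test fixtures can be sketched in a few lines. This example uses Python's sqlite3 with an in-memory database purely for illustration, not System.Transactions: each test's writes are discarded by a rollback in tearDown, leaving the schema as it was:

```python
import sqlite3
import unittest


class CustomerTableTest(unittest.TestCase):
    """Writes made during a test are thrown away by a rollback in
    tearDown -- the general idea behind the rollback techniques
    discussed above, shown with sqlite3 for illustration."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.conn.commit()  # the schema is committed; test data is not

    def tearDown(self):
        self.conn.rollback()  # discard everything the test wrote
        count = self.conn.execute(
            "SELECT COUNT(*) FROM customers").fetchone()[0]
        assert count == 0, "rollback should leave the table empty"
        self.conn.close()

    def test_insert_visible_inside_the_transaction(self):
        self.conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
        count = self.conn.execute(
            "SELECT COUNT(*) FROM customers").fetchone()[0]
        self.assertEqual(count, 1)
```

Useful as this is, it remains an integration-test technique: the test above still talks to a database, which is precisely why it belongs in the second assembly.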