I've been working on an ASP.NET MVC project for about 8 months now. For the most part I've been using TDD, though some aspects were covered by unit tests only after I had written the actual code. In total, the project has pretty good test coverage.
I'm quite pleased with the results so far. Refactoring really is much easier and my tests have helped me uncover quite a few bugs even before I ran my software the first time. Also, I have developed more sophisticated fakes and helpers to help me minimize the testing code.
However, what I don't really like is the fact that I frequently find myself having to update existing unit tests to account for refactorings I made to the software. Refactoring the software is now quick and painless, but refactoring my unit tests is quite boring and tedious. In fact the cost of maintaining my unit tests is higher than the cost of writing them in the first place.
I am wondering whether I might be doing something wrong, or if this ratio of test development cost to test maintenance cost is normal. I've already tried to write as many tests as possible so that they cover my user stories instead of systematically covering my objects' interfaces, as suggested in this blog article.
Also, do you have any further tips on how to write TDD tests so that refactoring breaks as few tests as possible?
Edit: As Henning and tvanfosson correctly remarked, it's usually the setup part that is most expensive to write and maintain. Broken tests are (in my experience) usually the result of a refactoring of the domain model that is not compatible with the setup part of those tests.
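For example (a made-up illustration with hypothetical types): if dozens of tests arrange their fixture by hand, adding a required dependency to Order breaks every one of them.

    // Before: every test constructs the SUT directly.
    var order = new Order(customer);

    // After refactoring Order to require a pricing service,
    // this line no longer compiles in any of those tests:
    var order = new Order(customer, pricingService);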
How do I refactor unit tests? Quite simply, the same way you would refactor any other code: in isolation. So if you want to refactor your tests, don't change the production code at the same time; just change your tests.
This is a well-known problem that can be addressed by writing tests according to best practices. These practices are described in the excellent book xUnit Test Patterns, which catalogs the test smells that lead to unmaintainable tests and provides guidance on how to write maintainable unit tests.
After having followed those patterns for a long time, I wrote AutoFixture, an open source library that encapsulates a lot of those core patterns.
It works as a Test Data Builder, but can also be wired up to work as an Auto-Mocking container and do many other strange and wonderful things.
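For instance, wiring it up as an Auto-Mocking container (assuming the optional AutoFixture.AutoMoq extension) is a one-liner:

    // Resolves interface dependencies via auto-generated Moq mocks.
    fixture.Customize(new AutoMoqCustomization());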
It helps a lot with regard to maintenance because it raises the abstraction level of writing a test considerably. Tests become a lot more declarative because you can state that you want an instance of a certain type instead of explicitly writing how it is created.
Imagine that you have a class with this constructor signature:
    public MyClass(Foo foo, Bar bar, Sgryt sgryt)
As long as AutoFixture can resolve all the constructor arguments, you can simply create a new instance like this:
    var sut = fixture.CreateAnonymous<MyClass>();
The major benefit is that if you decide to refactor the MyClass constructor, no tests break because AutoFixture will figure it out for you.
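To sketch a complete test (the NUnit attribute and the DoSomething method are assumptions for illustration; only the Fixture usage is AutoFixture's actual API):

    [Test]
    public void DoSomethingReturnsResult()
    {
        // The fixture resolves Foo, Bar and Sgryt for us.
        var fixture = new Fixture();
        var sut = fixture.CreateAnonymous<MyClass>();

        var result = sut.DoSomething();

        Assert.IsNotNull(result);
    }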
That's just a glimpse of what AutoFixture can do. It's a stand-alone library, so it will work with your unit testing framework of choice.
You might be writing your unit tests too close to your classes. What you should do is test public APIs. And by public APIs, I don't mean public methods on all your classes; I mean your public controllers.
By having your tests mimic how a user would interact with your controllers, without ever touching your model classes or helper functions directly, you allow yourself to refactor your code without having to refactor your tests. Of course, sometimes even your public API changes, and then you'll still have to change your tests, but that will happen far less often.
The downside of this approach is that you'll often have to go through complex controller setup just to test a tiny new helper function you want to introduce, but I think that in the end it's worth it. Moreover, you'll end up organizing your test code in a smarter way, making that setup code easier to write.
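As a minimal sketch (ProductsController, FakeProductRepository and ProductListViewModel are hypothetical names):

    [Test]
    public void IndexListsProductsThroughThePublicApi()
    {
        // Exercise only the controller; the model classes and
        // helpers behind it remain free to change shape.
        var controller = new ProductsController(new FakeProductRepository());

        var result = (ViewResult)controller.Index();

        Assert.IsInstanceOf<ProductListViewModel>(result.ViewData.Model);
    }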
This article helped me a lot: http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
On the other hand, there's no miracle method to avoid refactoring unit tests.
Everything comes with a price, and that's especially true if you want to do unit testing.
What I think he means is that it is the setup part that is quite tedious to maintain. We're having the exact same problem, especially when we introduce new dependencies, split dependencies, or otherwise change how the code is supposed to be used.
For the most part, when I write and maintain unit tests, I spend my time writing the setup/arrange code. In many of our tests we have the exact same setup code, and we've sometimes used private helper methods to do the actual setup, but with different values.
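Something like this hypothetical helper, for instance, where each test still has to decide on its own values:

    private static Customer CreateCustomer(string name, bool isActive)
    {
        // Shared construction logic, but the values still have
        // to be chosen and passed in by every single test.
        return new Customer(name, isActive);
    }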
However, that isn't a really good thing, because we still have to create all those values in every test. So we are now looking into writing our tests in a more specification/BDD style, which should help reduce the setup code and therefore the amount of time spent maintaining the tests. A few resources you can check out are http://elegantcode.com/2009/12/22/specifications/ and, for the BDD style of testing with MSpec, http://elegantcode.com/2009/07/05/mspec-take-2/
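For a flavour of the MSpec style (all of the types here are hypothetical):

    [Subject(typeof(OrderService))]
    public class when_placing_an_order
    {
        static OrderService sut;
        static OrderResult result;

        // The context is established once per specification class,
        // so the setup lives in exactly one place.
        Establish context = () =>
            sut = new OrderService(new FakeOrderRepository());

        Because of = () =>
            result = sut.Place(new Order());

        It should_accept_the_order = () =>
            result.Accepted.ShouldBeTrue();
    }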
Most of the time I see such refactorings affecting the setup of the unit tests, frequently by adding dependencies or changing expectations on those dependencies. These dependencies may be introduced by later features but affect earlier tests. In these cases I've found it very useful to refactor the setup code so that it is shared by multiple tests and parameterized so that it can be flexibly configured. Then, when I need to make a change for a new feature that affects the setup, I only need to refactor the tests in a single place.
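For example, a shared, parameterized setup might look like this (Moq is assumed as the mocking framework; OrderService and IOrderRepository are hypothetical):

    // Defaults cover the common case; tests that care override them.
    private static OrderService CreateSut(decimal discount = 0m)
    {
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetDiscount(It.IsAny<int>()))
                  .Returns(discount);
        return new OrderService(repository.Object);
    }

When a new feature adds a dependency or changes a default expectation, only CreateSut has to change.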