I'm about to start looking into using code coverage as part of my development process, and I'm wondering how it typically fits in with test-driven development.
Is code coverage an afterthought? Does your process go something like: write a test, write the code to make it pass, check coverage, and add tests for anything that isn't covered, repeating as you go?
Or do you run code coverage at the very end after numerous functional pieces have been implemented and then go back and work towards 100% coverage?
The third option I can think of is to strive for 100% coverage before even implementing the functionality.
Which of these is most common, and what are the benefits?
You don't keep writing tests until 100% code coverage is achieved. Instead, you write tests until all the required tests have been written and all of them pass. That in turn implies all the required code has been written, since you only write code when a test requires it.

If you've been following TDD, then no code was ever written without being required by a test, so you should always be near 100% coverage anyway.
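To make that ordering concrete, here is a minimal sketch of one red/green cycle. The class and method names are hypothetical and NUnit is assumed as the test framework; the point is only that the test comes first and the production code exists solely because the test demanded it:

```csharp
using NUnit.Framework;

// Step 1 (red): write the test first. PriceCalculator does not exist yet,
// so this fails until the production code below is added.
[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceByTenPercent()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 0.10m));
    }
}

// Step 2 (green): write only enough production code to make the test pass.
// Every line here was required by the test above, which is why coverage
// stays near 100% without ever being an explicit goal.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price - price * rate;
    }
}
```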
With TDD you should almost always be near 100% coverage when developing new code, since you don't write any code that isn't needed to make a test pass. The only code that isn't specifically covered should be code you judge too simple to need a test (say, an automatic property in C#). Refactoring can sometimes introduce blocks that aren't actually necessary, or change the code in unexpected ways, so you may want to run coverage at that point to make sure you haven't accidentally introduced untested code. Beyond that, I treat it mostly as a sanity check and run coverage analysis periodically for the same reasons. It can also be very useful when your discipline breaks down and you've neglected to work in a TDD manner.
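For the "too simple to test" case, one way to keep such code from muddying the periodic coverage report is to exclude it explicitly. The DTO below is hypothetical; the ExcludeFromCodeCoverage attribute comes from System.Diagnostics.CodeAnalysis in .NET:

```csharp
using System.Diagnostics.CodeAnalysis;

// A hypothetical DTO made up entirely of automatic properties. There is no
// behavior worth testing directly, so rather than writing tests just to
// satisfy the coverage number, it can be excluded from analysis.
[ExcludeFromCodeCoverage]
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Most .NET coverage tools honor this attribute, so the report then highlights only code that genuinely should have been driven out by a test.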