I'm starting (or at least trying) to code using TDD principles, and I have this question: how many tests do I need to write before I actually start coding?
Take for example a hypothetical `Math` class and a method `Divide(int a, int b)`.
a) Do I have to fully test all the methods of the `Math` class (`Sum`, `Average`, ...) before I start coding `Math`?
b) Do I have to fully test the `Divide` method, asserting for example on division by zero, before I start coding the method?
c) Or can I create a simple test assertion and verify that it fails, write the code and check that it's OK, repeating the process for each of the assertions of a method?
I think option c) is the correct one, but I couldn't find an answer to it (I did some searching but couldn't find a definitive answer).
It isn't realistic, or necessary, to expect 100% code coverage through unit tests. The unit tests you create depend on business needs and on the complexity of the application or applications; for new application code, aim for 95% or higher coverage.
It often makes sense to write the test first and then write as much code as needed to allow the test to pass. Doing this moves towards a practice known as Test-Driven Development (TDD). Bluefruit uses a lot of TDD because it helps us to build the right product without waste and redundancies.
Some teams instead try to mandate this as a fixed split of the work week, for example "spend 90% of your time writing code and 10% working on unit tests", reasoning that rules like these will ensure the team does "enough" unit testing.
Every behavior should be covered by a unit test, but not every method needs its own unit test. Many developers don't test get and set methods, because a method that does nothing but get or set an attribute value is so simple that it is considered immune to failure.
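For instance (a hypothetical `Account` class invented purely for illustration, with NUnit assumed for the test), you would normally not give a trivial auto-property its own test, but you would test the behavior that changes it:

```csharp
using NUnit.Framework;

public class Account
{
    // A trivial getter/setter like this is usually not given its own test...
    public decimal Balance { get; private set; }

    // ...but the behavior that uses it is.
    public void Deposit(decimal amount)
    {
        Balance += amount;
    }
}

public class AccountTests
{
    [Test]
    public void Deposit_AddsAmountToBalance()
    {
        var account = new Account();

        account.Deposit(100m);

        Assert.That(account.Balance, Is.EqualTo(100m));
    }
}
```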
Your option c) represents fully by-the-book TDD.
You write one failing test exercising a feature of the class that you are working on and then write only enough code to make that test pass. Then you do this again, for the next test.
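A minimal sketch of that first cycle, using the `Math` class from the question (NUnit is assumed here as the test framework; the test name and the specific numbers are made up for illustration):

```csharp
using NUnit.Framework;

// Red: one small failing test for the behaviour you are about to add.
[TestFixture]
public class MathTests
{
    [Test]
    public void Divide_TwoEvenlyDivisibleNumbers_ReturnsQuotient()
    {
        var math = new Math();

        Assert.That(math.Divide(10, 2), Is.EqualTo(5));
    }
}

// Green: only enough production code to make that test pass.
// (A strict purist might even start with "return 5;" and generalise
// only when the next test forces it.)
public class Math
{
    public int Divide(int a, int b)
    {
        return a / b;
    }
}
```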
Working this way should keep each new piece of code you write tightly focused on a particular use case/test, and you should also find that your tests remain distinct in what they cover.
You want to end up working in a red-green-refactor fashion, so that periodically you go back over both your code and your tests for places where you can refactor things into a better design.
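The division-by-zero assertion from your option b) then simply becomes one of the later cycles rather than something you must write before you start. For example, if you decide the method should reject a zero divisor with an `ArgumentException` (a design choice invented here for illustration; you might equally accept the built-in `DivideByZeroException`), the next red test and the guard clause that turns it green might look like this:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class MathDivideByZeroTests
{
    [Test]
    public void Divide_ByZero_ThrowsArgumentException()
    {
        var math = new Math();

        // Red: this fails until the guard clause below is added.
        Assert.Throws<ArgumentException>(() => math.Divide(10, 0));
    }
}

public class Math
{
    public int Divide(int a, int b)
    {
        // Green: the guard clause added in response to the new test.
        if (b == 0)
            throw new ArgumentException("Divisor must not be zero.", nameof(b));

        return a / b;
    }
}
```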
Of course, in the real world you may end up writing many red tests, or writing more code than a particular test requires, or even writing code without tests, but that is moving away from TDD and should only be done with caution.
The Wikipedia article on this is actually quite good: http://en.wikipedia.org/wiki/Test-driven_development
The first thing you want to do is write a specification for each method you want to implement. In your specification, address as many corner cases as you care about and define how the method should behave in each of those cases.
Once your specification is complete, design tests for every part of it, making sure that each test passes or fails on its own merits rather than because of unrelated corner-case conditions. At that point you are ready to write your implementation and the tests themselves. Once this is complete, refine your specification, tests and implementation as necessary until the results are exactly what you want from your implementation.
Then you document everything (particularly your reasoning for handling corner cases).
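As a small sketch of how a specification and its tests might line up for the `Divide` example (the specification clauses, the choice of exception, and the test names here are all invented for illustration; NUnit is assumed):

```csharp
using System;
using NUnit.Framework;

public class Math
{
    /// <summary>
    /// Specification (illustrative): returns the integer quotient of a and b.
    /// Corner cases: results truncate toward zero; b == 0 throws ArgumentException.
    /// </summary>
    public int Divide(int a, int b)
    {
        if (b == 0)
            throw new ArgumentException("Divisor must not be zero.", nameof(b));

        return a / b;
    }
}

// One test per clause of the specification.
public class DivideSpecificationTests
{
    [Test]
    public void ReturnsIntegerQuotient() =>
        Assert.That(new Math().Divide(7, 2), Is.EqualTo(3));

    [Test]
    public void TruncatesTowardZeroForNegativeDividend() =>
        Assert.That(new Math().Divide(-7, 2), Is.EqualTo(-3));

    [Test]
    public void ZeroDivisorThrowsArgumentException() =>
        Assert.Throws<ArgumentException>(() => new Math().Divide(7, 0));
}
```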
As others have mentioned, your option c) would be the pure TDD way to do this. The idea is to build your code up in small red-green-refactor increments. A good, simple example of this is Robert Martin's Bowling Kata.