I want to implement a method that tells me whether the coordinates (x and y) are out of bounds. How many tests should I write? To me it seems to be 5:
Am I creating redundant tests and should I only have 1 test for each method I want to implement?
This isn't usually the way we think about it in TDD. It's more: "what test do I need next?" So, typically, I'd start with (pseudocode)
given: bounds (5, 10, 15, 20)
assert: outOfBounds(0, 0)
and make that pass with
outOfBounds(x, y): return true
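For concreteness, here is a minimal sketch of that first step, assuming Python with pytest-style tests; the name out_of_bounds (mirroring the pseudocode's outOfBounds) and the reading of the bounds as (xmin, ymin, xmax, ymax) = (5, 10, 15, 20) are my assumptions, not anything given in the question.

```python
# Sketch only: the implementation is deliberately fake at this point.

def out_of_bounds(x, y):
    # Fake it: just enough to make the first test pass.
    return True


def test_origin_is_out_of_bounds():
    # (0, 0) lies outside the assumed (5, 10, 15, 20) bounds.
    assert out_of_bounds(0, 0)
```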
But I know that's not real yet, so I know I need another test.
assert: !outOfBounds(5, 10)
So now that fails. What's the simplest thing that could possibly work? Maybe
outOfBounds(x, y): return x == 0
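Continuing the same hedged sketch in Python: the second test forces the fake to change shape, but it is still a fake.

```python
def out_of_bounds(x, y):
    # Still faking it: this passes both tests so far without any real bounds logic.
    return x == 0


def test_origin_is_out_of_bounds():
    assert out_of_bounds(0, 0)


def test_lower_corner_is_in_bounds():
    # (5, 10) should sit inside the assumed (5, 10, 15, 20) bounds.
    assert not out_of_bounds(5, 10)
```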
Of course I know I'm still faking it, so I need another test. This keeps going 'til I'm not faking it any more. Maybe, in this case, I'd wind up with the same 5 cases you do with your "how many tests" question - but maybe I'd realize I'm done a little sooner than that.
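As one possible end state (a sketch, not necessarily what your five tests would drive out), still assuming the bounds mean xmin=5, ymin=10, xmax=15, ymax=20 and that points on the boundary count as in bounds:

```python
XMIN, YMIN, XMAX, YMAX = 5, 10, 15, 20  # assumed meaning of bounds (5, 10, 15, 20)


def out_of_bounds(x, y):
    # A point is out of bounds if either coordinate falls outside its range.
    return x < XMIN or x > XMAX or y < YMIN or y > YMAX


def test_origin_is_out_of_bounds():
    assert out_of_bounds(0, 0)


def test_lower_corner_is_in_bounds():
    assert not out_of_bounds(5, 10)


def test_upper_corner_is_in_bounds():
    assert not out_of_bounds(15, 20)


def test_point_past_xmax_is_out_of_bounds():
    assert out_of_bounds(16, 12)


def test_point_past_ymax_is_out_of_bounds():
    assert out_of_bounds(10, 21)
```

Whether boundary points count as in or out is exactly the kind of decision one more test would pin down.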
A better question is: Do I need another test?
You need to write enough tests to cover the behaviour you expect from your method - no more, no less.
Indeed, if you're practising TDD (as the title suggests) then the behaviour of your method should have been driven out by the tests you wrote, rather than the other way around - so you will already have found the optimal number of tests for the functionality you've written to make them pass. (Though it's common to think of edge cases and failure cases after having driven out the happy-path functionality, which I guess is what's happened here?)
For this specific case, the five tests you've described here sound perfectly sensible to me.