I've recently joined a team that relies heavily on unit testing. Nobody can explain to me why this form of testing is so important, but they treat it like law.
I understand that the idea of automated testing is to prevent regression, but I don't see how that could be a problem in the first place. Modular, object-oriented, concise code that is well-commented doesn't have a problem with regression. If you build it right the first time, and design for the inevitable slew of feature additions that happen in the future, you'll never need tests.
And further, isn't that what graceful error handling and logging is supposed to accomplish? Why spend weeks hashing out assert statements and unit tests when you can just ensure that all your external dependencies double-check their availability first?
Am I being arrogant in coming to the conclusion that unit testing is a crutch for "bad" codebases which are flawed and built poorly?
This is a serious question. I can't find any good answers anywhere, and everyone I ask seems to think I'm being a troll if I question the purpose of automated testing.
EDIT: Thanks for the answers, I think I'm understanding it now. I see a few people voted to delete, but I'd like to thank the people that answered; it really did help!
No one is perfect - you make mistakes eventually. Unit testing is designed to catch a mistake and pinpoint exactly where it happened.
Error handling and logging only help when a bug is triggered; unit testing is what gets bugs triggered in testing rather than in production.
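To make that concrete, here's a minimal sketch (the days_in_month function and the pytest-style test are mine, purely for illustration): the unit test forces the buggy path to run on a developer's machine instead of waiting for a user to stumble into it in production.

    # Hypothetical function with a subtle bug: it misses the century
    # rule for leap years (1900 is NOT a leap year).
    def days_in_month(month: int, year: int) -> int:
        days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        if month == 2 and year % 4 == 0:  # bug: should also check % 100 and % 400
            return 29
        return days[month - 1]

    def test_february_1900_is_not_a_leap_year():
        # This fails at test time, on the developer's machine,
        # rather than as a production incident.
        assert days_in_month(2, 1900) == 28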
You have a piece of software with 3 different parts, each of which has 2 different options.
         A      C      E
        / \    / \    / \
    in-<   >--<   >--<   >-out
        \ /    \ /    \ /
         B      D      F
You could test this by manually putting in inputs and checking outputs - first you'd put in some inputs that triggered A,C,E; then you'd put in some that did A,C,F, and so on, until you covered everything through B,D,F.
But keep in mind that each of these six components has its own individual parameters and internal paths that need to be tested - say there are roughly 10 variations for each. So that's at least 10*10*10 = 1000 different inputs you need to check just for the A,C,E path. There are 2*2*2 = 8 different possible flows through these 6 components, so that's 8*1000 = 8000 different combinations of inputs you need to check just to make sure you hit all of the possible cases.
On the other hand, you could unit test. If you clearly define the unit boundaries for each component, then you can write 10 unit tests for A, 10 for B, and so on, testing those boundaries. That gives you a total of 60 unit tests for the components, plus a handful (say 5 per flow, so 40) integration tests that make sure all of the components are tied together properly. That's a total of 100 tests which accomplish effectively the same coverage of functionality.
By using unit testing, you've reduced the amount of testing required for an equivalent amount of coverage by a factor of about 80! And that's for a relatively trivial system. Now consider more complex software, where the number of components is almost certainly greater than 6 and the number of cases those components handle is almost certainly greater than 10. The savings you get from unit testing rather than pure integration testing keep compounding.
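If it helps to see the shape of that split, here's a rough sketch (the component functions and the pytest parametrization are made up for illustration; a real system would obviously be bigger):

    import pytest

    # Hypothetical stand-ins for components A and C; imagine each
    # having ~10 meaningful input variations.
    def component_a(x):
        return x + 1

    def component_c(x):
        return x * 2

    # Unit tests: ~10 parametrized cases per component, in isolation.
    @pytest.mark.parametrize("given,expected", [(0, 1), (1, 2), (9, 10)])
    def test_component_a(given, expected):
        assert component_a(given) == expected

    # Integration tests: a handful per flow, only checking the wiring,
    # since each component's cases are already covered in isolation.
    def test_flow_through_a_and_c():
        assert component_c(component_a(1)) == 4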
Short answer: yes, you're arrogant. ;)
Assuming you truly are perfect, your code is not only correct and flawless when you write it, but it also takes into account all future requirements that will be placed upon it.
Now.... How do you know that your code is perfect and correct? You need to test it. If it hasn't been tested, you can't trust that it works.
It's not just about regressions, either - "regression" implies the code used to work at some point. What if it never worked? What if it was buggy from the moment it was first written?
I understand that the idea of automated testing is to prevent regression, but I don't see how that could be a problem in the first place. Modular, object-oriented, concise code that is well-commented doesn't have a problem with regression.
Who told you that? That person ought to be flogged. Object-oriented code is just as error-prone as anything else. There's nothing magical about it, it's no silver bullet. At the end of the day, whenever you change a piece of code, there's a chance that you break something, somewhere. The chance might be larger or smaller depending on the code in question, but no matter how good the code, you can't be sure that you haven't introduced a regression unless you test it.
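Here's a tiny illustration of the kind of regression I mean (the function and the "harmless" refactor are hypothetical):

    # A "harmless" refactor: someone simplifies the function and
    # silently drops the strip().
    def normalize_name(name: str) -> str:
        return name.title()  # used to be: name.strip().title()

    def test_normalize_name_trims_whitespace():
        # An existing test like this fails the moment the refactor
        # lands, surfacing the regression before it ships.
        assert normalize_name("  ada lovelace ") == "Ada Lovelace"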
If you build it right the first time, and design for the inevitable slew of feature additions that happen in the future, you'll never need tests.
How do you build it right the first time, though? As I said above, to do so, you need to have tests, to show you that the code works. But more importantly, how do you "design for" features that will be added in the future? You don't even know what they are yet.
And further, isn't that what graceful error handling and logging is supposed to accomplish? Why spend weeks hashing out assert statements and unit tests when you can just ensure that all your external dependencies double-check their availability first?
No, not at all.
Your code should certainly handle error conditions and it should certainly log what you need logged.
But you still need to know that it does all this correctly. And you need to know that it handles the non-error conditions correctly too! It's great to know that "if the SQL server is unavailable, we show a nice error message to the user and exit". But what if it is available? Does your application work then?
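A quick sketch of what testing both paths looks like (the get_user function and its db collaborator are invented for this example; the mock stands in for the SQL server):

    from unittest.mock import Mock

    # Hypothetical data-access function: it has an error path (database
    # down) and a happy path (database up) - both need testing.
    def get_user(db, user_id):
        try:
            return db.fetch(user_id)
        except ConnectionError:
            return None  # the "nice error message" case

    def test_unavailable_database_is_handled_gracefully():
        db = Mock()
        db.fetch.side_effect = ConnectionError
        assert get_user(db, 42) is None

    def test_available_database_actually_returns_the_user():
        # Knowing the error path works tells you nothing about this path.
        db = Mock()
        db.fetch.return_value = {"id": 42, "name": "Ada"}
        assert get_user(db, 42) == {"id": 42, "name": "Ada"}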
For any nontrivial application, there are a lot of things that can go wrong. There's a lot of functionality, a lot of code, and a lot of different execution paths.
Trying to test it manually is never going to exercise all these code paths. It's never going to get around to testing every aspect of every feature in every context. And even if it did, that just tells you that "the code worked today". Will it work tomorrow? How can you know? Sure, your gut feeling might tell you that "the code I committed since then hasn't broken anything", but how do you know that? You need to test it again. And again. And again.
You ask if unit tests are a crutch for bad code bases. They're the opposite. They're the check-ups, the doctor visits, that prevent code bases from going bad. They don't just tell you whether or not your code works, but when it works, and more importantly, when it stops working. You don't think you're going to introduce any regressions? How sure are you? Can you afford to be wrong?