Limitations of Unit Testing

Unit testing cannot detect integration or interfacing issues between two modules. It cannot catch complex errors that span multiple modules. It cannot test non-functional attributes such as usability, scalability, or the overall performance of the system.
Unit testing helps ensure that code meets quality standards before it is deployed, which supports a reliable engineering environment where quality is paramount. Over the course of the product development life cycle, unit testing saves time and money and helps developers write better code more efficiently.
But, being a white box software testing technique, unit testing is an in-depth process that requires a lot of knowledge about the code and how it interacts with other parts of the project. As a result, it can be hard to start writing unit tests, especially if you are new to the codebase.
Yes. This is a link to a study by Boby George and Laurie Williams at NC State, and another by Nagappan et al. I'm sure there are more. Dr. Williams' publications on testing may provide a good starting point for finding them.
[EDIT] The two papers above specifically reference TDD and show a 15-35% increase in initial development time after adopting TDD, but a 40-90% decrease in pre-release defects. If you can't get at the full-text versions, I suggest using Google Scholar to see if you can find a publicly available version.
" I have to convice the other programmers and, more importantly, the bean-counters in management, that all the extra time spent learning the testing framework, writing tests, keeping them updated, etc.. will pay for itself, and then some."
Why?
Why not just do it, quietly and discreetly? You don't have to do it all at once. You can do it in tiny pieces.
The framework learning takes very little time.
Writing one test, just one, takes very little time.
Without unit testing, all you have is some confidence in your software. With one unit test, you still have your confidence, plus proof that at least one test passes.
That's all it takes. No one needs to know you're doing it. Just do it.
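To show how little that first step is, here is a minimal sketch in Python with the standard unittest module; the add() function is a hypothetical stand-in for whatever small piece of your code you pick first:

```python
import unittest

# Hypothetical function standing in for any small piece of existing code.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_two_positive_numbers(self):
        # One assertion is enough to start: proof that at least one test passes.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```

Run it with `python -m unittest`, and you have your first green test before anyone has had to approve anything.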
I take a different approach to this:
What assurance do you have that your code is correct? Or that it doesn't break assumption X when someone on your team changes func1()? Without unit tests keeping you 'honest', I'm not sure you have much assurance.
The notion of keeping tests updated is interesting. The tests themselves don't often have to change. I've got 3x the test code compared to the production code, and the test code has been changed very little. It is, however, what lets me sleep well at night and the thing that allows me to tell the customer that I have confidence that I can implement the Y functionality without breaking the system.
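As a sketch of that "keeping you honest" idea (again Python with unittest; func1() and the assumption it encodes are hypothetical), a test that pins down the assumption fails loudly the moment someone changes it:

```python
import unittest

# Hypothetical: suppose the rest of the system assumes func1() never returns
# None for valid input and always normalizes its result. Pinning those
# assumptions in tests makes a later change to func1() fail visibly instead
# of silently breaking its callers.
def func1(value):
    return value.strip().lower()

class TestFunc1Assumptions(unittest.TestCase):
    def test_never_returns_none_for_valid_input(self):
        self.assertIsNotNone(func1("  Hello  "))

    def test_normalizes_case_and_whitespace(self):
        self.assertEqual(func1("  Hello  "), "hello")

if __name__ == "__main__":
    unittest.main()
```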
Perhaps in academia there is evidence, but I've never worked anywhere in the commercial world where anyone would pay for such a study. I can tell you, however, that it has worked well for me: it took little time to get accustomed to the testing framework, and writing tests made me really think about my requirements and the design, far more than I ever did when working on teams that wrote no tests.
Here's where it pays for itself: 1) you have confidence in your code, and 2) you catch problems earlier than you would otherwise. You don't have the QA guy saying, "Hey, you didn't bother bounds-checking the xyz() function, did you?" He doesn't get to find that bug, because you found it a month ago. That is good for him, good for you, good for the company and good for the customer.
Clearly this is anecdotal, but it has worked wonders for me. Not sure I can provide you with spreadsheets, but my customer is happy and that is the end goal.
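To make the bounds-checking point concrete, here is a sketch (Python/unittest; xyz() and its contract are hypothetical) of the kind of boundary test that finds that bug a month before QA does:

```python
import unittest

# Hypothetical xyz(): returns the item at the given index and is expected to
# raise IndexError (rather than return garbage) for out-of-range indexes.
def xyz(items, index):
    if index < 0 or index >= len(items):
        raise IndexError(f"index {index} out of range for {len(items)} items")
    return items[index]

class TestXyzBounds(unittest.TestCase):
    def test_out_of_range_index_raises(self):
        # The boundary case the QA engineer would otherwise have found.
        with self.assertRaises(IndexError):
            xyz(["a", "b", "c"], 3)

    def test_negative_index_raises(self):
        with self.assertRaises(IndexError):
            xyz(["a", "b", "c"], -1)

if __name__ == "__main__":
    unittest.main()
```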
We've demonstrated with hard evidence that it's possible to write crappy software without Unit Testing. I believe there's even evidence for crappy software with Unit Testing. But this is not the point.
Unit testing, or Test Driven Development (TDD), is a design technique, not a testing technique. Code that is written test-driven looks completely different from code that is not.
Even though this is not your question, I wonder whether this is really the best road to go down: answering questions (and bringing evidence that might be challenged by other reports) that may be the wrong questions in the first place. Even if you find hard evidence for your case, somebody else might find hard evidence against it.
Is it the business of the bean counters to determine how the technical people should work? Are they providing the cheapest tools in all cases because they believe you don't need more expensive ones?
This argument is either won based on trust (one of the fundamental values of agile teams) or lost based on role power of the winning party. Even if the TDD-proponents win based on role power I'd count it as lost.
If you are also interested in evidence against unit testing, here is one well-researched and thought-out article:
Why Most Unit Testing is Waste by James O. Coplien (lean and agile guru)
More about TDD than strictly unit testing: here is a link to the paper Realizing quality improvement through test driven development: results and experiences of four industrial teams by Nagappan, E. Michael Maximilien, Thirumalesh Bhat, and Laurie Williams, published by Microsoft's Empirical Software Engineering and Measurement (ESM) group and already mentioned here.
The study found that the TDD teams produced code that was between 60% and 90% better (in terms of defect density) than that of the non-TDD teams. However, the TDD teams took between 15% and 35% longer to complete their projects.