
Is 100% code coverage a really good thing when doing unit tests? [closed]

I've always been taught that maximizing code coverage with unit tests is good. I also hear developers from big companies such as Microsoft say that they write more lines of test code than executable code.

But is it really that great? Doesn't it sometimes seem like a complete waste of time whose only effect is to make maintenance more difficult?

For example, let's say I have a method DisplayBooks() which populates a list of books from a database. The product requirements say that if there are more than one hundred books in the store, only one hundred must be displayed.

So, with TDD,

  1. I will start by writing a unit test, BooksLimit(), which saves two hundred books in the database, calls DisplayBooks(), and does an Assert.AreEqual(100, DisplayedBooks.Count) (a sketch follows this list).
  2. Then I will run the test to check that it fails,
  3. Then I'll change DisplayBooks() to limit the results to 100, and
  4. Finally I will rerun the test to check that it succeeds.
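
Concretely, the test from step 1 might look like the following minimal C# sketch (NUnit syntax; the in-memory store and the DisplayBooks signature are hypothetical stand-ins, since the question doesn't show how the database is wired up):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class BookStoreTests
{
    [Test]
    public void BooksLimit()
    {
        // Hypothetical in-memory store standing in for the database.
        var store = new List<string>();
        for (int i = 0; i < 200; i++)
            store.Add("Book " + i);

        List<string> displayedBooks = DisplayBooks(store);

        Assert.AreEqual(100, displayedBooks.Count);
    }

    // Step 3's production change, inlined here only to keep the
    // sketch self-contained: limit the results to 100.
    private static List<string> DisplayBooks(List<string> store)
    {
        return store.Take(100).ToList();
    }
}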

Well, isn't it much easier to go directly to the third step and never write the BooksLimit() unit test at all? And isn't it more Agile, when the requirement changes from a 100-book to a 200-book limit, to change only one character, instead of changing the test, running it to check that it fails, changing the code, and running the tests again to check that they succeed?

Note: let's assume that the code is fully documented. Otherwise, some may say, and they would be right, that full unit tests help in understanding code which lacks documentation. In fact, having a BooksLimit() unit test shows very clearly that there is a maximum number of books to display, and that this maximum is 100. Stepping into code without unit tests would be much more difficult, since such a limit may be implemented through for (int bookIndex = 0; bookIndex < 100; ... or foreach ... if (count >= 100) break;.
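
For illustration, those two equivalent shapes might look like this (a sketch; the class, method and parameter names are hypothetical):

using System;
using System.Collections.Generic;

public static class BookListing
{
    // Variant 1: the limit is baked into the loop bound.
    public static void DisplayBooksWithFor(IList<string> books)
    {
        for (int bookIndex = 0; bookIndex < 100 && bookIndex < books.Count; bookIndex++)
            Console.WriteLine(books[bookIndex]);
    }

    // Variant 2: the limit is enforced by an early exit inside the loop.
    public static void DisplayBooksWithForeach(IEnumerable<string> books)
    {
        int count = 0;
        foreach (var book in books)
        {
            if (count >= 100)
                break;
            Console.WriteLine(book);
            count++;
        }
    }
}

A BooksLimit() test pins the limit to 100 regardless of which of these shapes the implementation takes.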

asked Jun 26 '10 by Arseni Mourzenko


2 Answers

Well, isn't it much easier to go directly to the third step and never write the BooksLimit() unit test at all?

Yes... If you don't spend any time writing tests, you'll spend less time writing tests. Your project might take longer overall, because you'll spend a lot of time debugging, but maybe that's easier to explain to your manager? If that's the case... get a new job! Testing is crucial to improving your confidence in your software.

Unittesting gives the most value when you have a lot of code. It's easy to debug a simple homework assignment using a few classes without unittesting. Once you get out in the world, and you're working in codebases of millions of lines, you're gonna need it. You simply can't single-step your debugger through everything. You simply can't understand everything. You need to know that the classes you're depending on work. You need to know when someone says "I'm just gonna make this change to the behavior... because I need it" but has forgotten that there are two hundred other uses that depend on that behavior. Unittesting helps prevent that.

With regard to making maintenance harder: NO WAY! I can't capitalize that enough.

If you're the only person that ever worked on your project, then yes, you might think that. But that's crazy talk! Try to get up to speed on a 30k line project without unittests. Try to add features that require significant changes to code without unittests. There's no confidence that you're not breaking implicit assumptions made by the other engineers. For a maintainer (or new developer on an existing project) unittests are key. I've leaned on unittests for documentation, for behavior, for assumptions, for telling me when I've broken something (that I thought was unrelated). Sometimes a poorly written API has poorly written tests and can be a nightmare to change, because the tests suck up all your time. Eventually you're going to want to refactor this code and fix that, but your users will thank you for that too - your API will be far easier to use because of it.

A note on coverage:

To me, it's not about 100% test coverage. 100% coverage doesn't find all the bugs. Consider a function with two if statements:

// Will return a number less than or equal to 3
int Bar(bool cond1, bool cond2) {
  int b = 0;
  if (cond1) {
    b++;
  } else {
    b += 2;
  }

  if (cond2) {
    b += 2;
  } else {
    b++;
  }

  return b;
}

Now suppose I write tests that check:

EXPECT_EQ(3, Bar(true, true));
EXPECT_EQ(3, Bar(false, false));

That's 100% line coverage. It's also a function that doesn't meet its contract: Bar(false, true) fails, because it returns 4. So "complete coverage" is not the end goal.
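
To make the missing paths explicit, here is a minimal sketch in C# (matching the question's examples; the port of Bar and the test names are mine, not part of the answer above). The first two tests are the pair that already gives 100% line coverage; the last one exposes the contract violation:

using NUnit.Framework;

public static class PathCoverageDemo
{
    // Hypothetical C# port of the Bar function above; the stated
    // contract is "returns a number less than or equal to 3".
    public static int Bar(bool cond1, bool cond2)
    {
        int b = 0;
        if (cond1) { b++; } else { b += 2; }
        if (cond2) { b += 2; } else { b++; }
        return b;
    }
}

[TestFixture]
public class BarPathTests
{
    // These two calls execute every line: 100% line coverage.
    [Test] public void TrueTrue() { Assert.AreEqual(3, PathCoverageDemo.Bar(true, true)); }
    [Test] public void FalseFalse() { Assert.AreEqual(3, PathCoverageDemo.Bar(false, false)); }

    // But two booleans give four paths, and one of them breaks the contract.
    [Test] public void TrueFalse() { Assert.LessOrEqual(PathCoverageDemo.Bar(true, false), 3); } // passes: returns 2
    [Test] public void FalseTrue() { Assert.LessOrEqual(PathCoverageDemo.Bar(false, true), 3); } // FAILS: returns 4
}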

Honestly, I would skip tests for BooksLimit(). It returns a constant, so it probably isn't worth the time to write them (and it should be tested when writing DisplayBooks()). I might be sad when someone decides to (incorrectly) calculate that limit from the shelf size, and it no longer satisfies our requirements. I've been burned by "not worth testing" before. Last year I wrote some code that I said to my coworker: "This class is mostly data, it doesn't need to be tested". It had a method. It had a bug. It went to production. It paged us in the middle of the night. I felt stupid. So I wrote the tests. And then I pondered long and hard about what code constitutes "not worth testing". There isn't much.

So, yes, you can skip some tests. 100% test coverage is great, but it doesn't magically mean your software is perfect. It all comes down to confidence in the face of change.

If I put class A, class B and class C together and find something that doesn't work, do I want to spend time debugging all three? No. I want to know that A and B already meet their contracts (via unittests), so my new code in class C is probably what's broken. So I unittest it. How would I even know it's broken if I didn't unittest? By clicking some buttons and trying the new code? That's good, but not sufficient. Once your program scales up, it'll be impossible to rerun all your manual tests to check that everything works right. That's why people who unittest usually automate running their tests too. Tell me "Pass" or "Fail", don't tell me "the output is ...".
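
That last line is the difference between a printout a human has to read and an assertion a test runner can judge automatically. A minimal sketch (TotalPrice is a hypothetical stand-in for code under test):

using NUnit.Framework;

[TestFixture]
public class PriceTests
{
    // Hypothetical function under test.
    private static decimal TotalPrice(decimal unitPrice, int quantity)
    {
        return unitPrice * quantity;
    }

    // Manual checking prints "the output is 59.90" and leaves a human to
    // judge it. An assertion reduces the whole run to Pass/Fail, so the
    // entire suite can be rechecked automatically after every change.
    [Test]
    public void TotalPriceMultipliesUnitPriceByQuantity()
    {
        Assert.AreEqual(59.90m, TotalPrice(5.99m, 10));
    }
}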

OK, gonna go write some more tests...

answered Sep 20 '22 by Stephen


100% unit test coverage is generally a code smell, a sign that someone has gone all OCD over the green bar in the coverage tool instead of doing something more useful.

Somewhere around 85% is the sweet spot, where a test failure more often than not indicates an actual or potential problem, rather than simply being an inevitable consequence of any textual change outside comment markers. You are not documenting any useful assumptions about the code if your assumption is 'the code is what it is, and if it were in any way different it would be something else'. That's a problem solved by a comment-aware checksum tool, not a unit test.

I wish there were some tool that would let you specify a target coverage and then, if you accidentally went over it, show things in yellow/orange/red to push you towards deleting some of the spurious extra tests.

answered Sep 20 '22 by soru