
How to use unit tests in projects with many levels of indirection

I was looking over a fairly modern project created with a big emphasis on unit testing. In accordance with the old adage "every problem in object-oriented programming can be solved by introducing a new layer of indirection", this project was sporting multiple layers of indirection. The side effect was that a fair amount of code looked like the following:

public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}

Now, because of the emphasis on unit testing and on maintaining high code coverage, every piece of code had unit tests written against it. This little method therefore had three unit tests of its own (the first is sketched after the list below). Those would check:

  1. If balanceProvider.IsOverdraft() returns true then IsOverdraft should return true
  2. If balanceProvider.IsOverdraft() returns false then IsOverdraft should return false
  3. If balanceProvider throws an exception then IsOverdraft should rethrow the same exception
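
For illustration, the first of those tests might look roughly like this. This is a hedged sketch in NUnit style; the IBalanceProvider interface, the AccountService class under test, and the Mockery wiring are assumptions, not code from the original project:

[Test]
public void IsOverdraft_ReturnsTrue_WhenProviderReportsOverdraft()
{
    Mockery mocks = new Mockery();
    IBalanceProvider mockBalanceProvider = mocks.NewMock<IBalanceProvider>();

    NMock2.Expect.Once.On(mockBalanceProvider)
        .Method("IsOverdraft")
        .Will(NMock2.Return.Value(true));

    var service = new AccountService(mockBalanceProvider);

    Assert.IsTrue(service.IsOverdraft());
    mocks.VerifyAllExpectationsHaveBeenMet();
}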

To make things worse, the mocking framework used (NMock2) accepted method names as string literals, as follows:

NMock2.Expect.Once.On(mockBalanceProvider)
    .Method("IsOverdraft")
    .Will(NMock2.Return.Value(false));

That obviously turned the "red, green, refactor" cycle into "red, green, refactor, rename in test, rename in test, rename in test". Using a different mocking framework, such as Moq, would help with refactoring, but it would require a sweep through all existing unit tests.
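
For comparison, here is roughly what the same expectation looks like in Moq, where the method is referenced through a lambda rather than a string literal, so a rename refactoring updates the test automatically (the IBalanceProvider and AccountService names are carried over from the sketch above as assumptions):

var mockBalanceProvider = new Mock<IBalanceProvider>();
mockBalanceProvider.Setup(p => p.IsOverdraft())
    .Returns(false);

// The mock's Object property is what gets injected into the class under test.
var service = new AccountService(mockBalanceProvider.Object);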

What is the ideal way to handle this situation?

A) Keep fewer layers of indirection, so that these forwarding calls no longer occur.

B) Do not test these forwarding methods, as they contain no business logic. For coverage purposes, mark them all with the ExcludeFromCodeCoverage attribute (sketched after this list).

C) Test only that the proper method is invoked, without checking return values, exceptions, etc.

D) Suck it up, and keep writing those tests ;)
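
For option B, the attribute lives in System.Diagnostics.CodeAnalysis (available since .NET 4.0) and is honoured by most .NET coverage tools; applied to the forwarding method from the question, it would look like this:

using System.Diagnostics.CodeAnalysis;

[ExcludeFromCodeCoverage] // coverage tools will skip this trivial forwarder
public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}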

asked Jan 19 '13 by Sebastian K

2 Answers

Either B or C. That's the problem with such general requirements ("every method must have a unit test", "every line of code needs to be covered"): sometimes the benefit they provide is not worth the cost. If this is something you came up with, I suggest rethinking the approach. "We must have 95% code coverage" might look appealing on paper, but in practice it quickly spawns problems like the one you have.

Also, the code you're testing is something I'd call trivial code. Having three tests for it is most likely overkill. For that single line of code, you'll have to maintain something like forty more. Unless your software is mission critical (which might explain the high-coverage requirement), I'd skip those tests.

One of the (IMHO) most pragmatic pieces of advice on this topic was provided by Kent Beck some time ago on this very site, and I expanded a bit on those thoughts in my blog post: What should you test?

answered by k.m


Honestly, I think we should write tests only to document our code in a helpful manner. We should not write tests just for the sake of code coverage. (Code coverage is just a great tool for finding out what is NOT covered, so that we can tell whether we forgot important unit test cases or actually have some dead code somewhere.)

If I write a test but it ends up being a mere "duplication" of the implementation, or worse, if the test is harder to understand than the actual implementation, then such a test should not exist. Nobody is interested in reading such tests. Tests should not contain implementation details; tests are about "what" should happen, not "how" it will be done.

Since you've tagged your question with "TDD", I would add that TDD is a design practice. If I already know with complete certainty, in advance, the design of what I'm going to implement, then there is no point in using TDD and writing unit tests for it (although I will always have a high-level acceptance test that covers that code). That happens often when the thing to design is really simple, as in your example. TDD is not about testing and code coverage; it is about helping us design and document our code. There is no point in using a design tool or a documentation tool for designing or documenting simple, obvious things.

In your example, it's far easier to understand what's going on by reading the implementation directly than by reading the test. The test adds no value in terms of documentation, so I'd happily erase it.

On top of that, such tests are horribly brittle, because they are tightly coupled to the implementation. That's a nightmare in the long term when you need to refactor, since any time you change the implementation, they will break.

What I'd suggest is not to write such tests, and instead to have higher-level component tests, or fast integration/acceptance tests, that exercise these layers without knowing anything at all about their inner workings.
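
As a rough illustration of that idea, here is a hedged sketch of such a higher-level test in NUnit style; the Account and InMemoryBalanceProvider names are hypothetical stand-ins, not types from the original question:

[Test]
public void Account_WithNegativeBalance_ReportsOverdraft()
{
    // Wire up a real (in-memory) collaborator instead of mocking the
    // forwarding layer; the test only exercises public behaviour.
    var provider = new InMemoryBalanceProvider(balance: -50m);
    var account = new Account(provider);

    Assert.IsTrue(account.IsOverdraft());
}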

answered by foobarcode