
Are there situations where unit tests are detrimental to code?

Tags:

unit-testing

Most of the discussion on this site is very positive about unit testing. I'm a fan of unit testing myself. However, I've found extensive unit testing brings its own challenges. For example, unit tests are often closely coupled to the code they test, which can make API changes increasingly costly as the volume of tests grows.

Have you found real-world situations where unit tests have been detrimental to code quality or time to delivery? How have you dealt with these situations? Are there any 'best practices' which can be applied to the design and implementation of unit tests?

There is a somewhat related question here: Why didn't unit testing work out for your project?

asked Jul 27 '09 by ire_and_curses



6 Answers

With extensive unit testing you will start to find that refactoring operations are more expensive for exactly the reasons you said.

IMHO this is a good thing. Big, expensive changes to an API should have a bigger cost than small, cheap ones. Refactoring is not a free operation, and it's important to understand the impact on both yourself and the consumers of your API. Unit tests are a great ruler for measuring how expensive an API change will be to consume.

Part of this problem, though, is relieved by tooling. Most IDEs support refactoring operations on a code base, either directly or indirectly via plugins. Using these operations to update your unit tests will relieve a bit of the pain.

answered by JaredPar


Are there any 'best practices' which can be applied to the design and implementation of unit tests?

Make sure your unit tests haven't become integration tests. For example, if you have unit tests for a class Foo, then ideally those tests can only break if

  1. there was a change in Foo
  2. or there was a change in the interfaces used by Foo
  3. or there was a change in the domain model (typically you'll have some classes like "Customer" which are central to the problem space, have no room for abstraction and are therefore not hidden behind an interface)

If your tests are failing because of any other changes, then they have become integration tests and you'll get in trouble as the system grows bigger. Unit tests should have no such scalability issues because they test an isolated unit of code.
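To make that concrete, here is a minimal Python sketch (the mailer dependency is hypothetical; only the name Foo comes from the answer). The test exercises Foo against a stand-in for the interface it uses, so it can only break for the reasons listed above, never because a real mail server or database changed:

    import unittest
    from unittest.mock import Mock

    class Foo:
        # Unit under test: depends on a mailer interface, not a concrete mail client.
        def __init__(self, mailer):
            self.mailer = mailer

        def notify(self, customer_email):
            # Business rule: every notification goes through the mailer exactly once.
            self.mailer.send(customer_email, "Welcome!")
            return True

    class FooTest(unittest.TestCase):
        def test_notify_sends_exactly_one_mail(self):
            mailer = Mock()  # stand-in for the interface Foo depends on
            foo = Foo(mailer)
            self.assertTrue(foo.notify("customer@example.com"))
            mailer.send.assert_called_once_with("customer@example.com", "Welcome!")

    if __name__ == "__main__":
        unittest.main()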

answered by Wim Coenen


One of the projects I worked on was heavily unit-tested; we had over 1000 unit tests for 20 or so classes. There was slightly more test code than production code. The unit tests caught innumerable errors introduced during refactoring operations; they definitely made it easy and safe to make changes, extend functionality, and so on. The released code had a very low bug rate.

To encourage ourselves to write the unit tests, we specifically chose to keep them 'quick and dirty' - we would bash out a test as we produced the project code, and the tests were boring and 'not real code', so as soon as we wrote one that exercised the functionality of the production code, we were done and moved on. The only criterion for the test code was that it fully exercised the API of the production code.

What we learnt the hard way is that this approach does not scale. As the code evolved, we saw a need to change the communication pattern between our objects, and suddenly I had 600 failing unit tests! Fixing this took me several days. This level of test breakage happened two or three times with further major architecture refactorings. In each case I don't believe we could reasonably have foreseen the code evolution that was required beforehand.

The moral of the story for me was this: unit-testing code needs to be just as clean as production code. You simply can't get away with cutting and pasting in unit tests. You need to apply sensible refactoring, and decouple your tests from the production code where possible by using proxy objects.

Of course all of this adds some complexity and cost to your unit tests (and can introduce bugs to your tests!), so it's a fine balance. But I do believe that the concept of 'unit tests', taken in isolation, is not the clear and unambiguous win it's often made out to be. My experience is that unit tests, like everything else in programming, require care, and are not a methodology that can be applied blindly. It's therefore surprising to me that I've not seen more discussion of this topic on forums like this one and in the literature.
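One concrete tactic that follows from this (a hypothetical sketch; the Order class and make_order helper are illustrative, not from the project described): route all test object construction through a single helper, so a constructor or protocol change is absorbed in one place instead of in hundreds of copy-pasted tests:

    import unittest

    class Order:
        # Production class whose constructor has grown over time.
        def __init__(self, customer, items, currency="USD"):
            self.customer = customer
            self.items = items
            self.currency = currency

        def total(self):
            return sum(price for _, price in self.items)

    def make_order(**overrides):
        # Single point of construction for tests: when Order's signature
        # changes, only this helper needs updating, not every test.
        defaults = {"customer": "test-customer", "items": [("widget", 10.0)]}
        defaults.update(overrides)
        return Order(**defaults)

    class OrderTest(unittest.TestCase):
        def test_total_sums_item_prices(self):
            order = make_order(items=[("a", 2.0), ("b", 3.0)])
            self.assertEqual(order.total(), 5.0)

    if __name__ == "__main__":
        unittest.main()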

answered by ire_and_curses


Mostly in cases where the system was developed without unit testing in mind, so that testing was an afterthought rather than a design tool. When you develop with automated tests from the start, the chances of breaking your API diminish.

answered by Otávio Décio


An excess of false positives can slow development down, so it's important to test for what you actually want to remain invariant. This usually means writing unit tests in advance for requirements, then following up with more detailed unit tests to detect unexpected shifts in output.
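As an illustration (a hypothetical example, not from the answer itself): a test that asserts the requirement survives cosmetic changes that would trip a test pinned to the exact output string:

    import unittest

    def format_receipt(items):
        # The exact layout is an implementation detail and may change;
        # the invariant is that every item and the correct total appear.
        lines = [f"{name}: ${price:.2f}" for name, price in items]
        lines.append(f"TOTAL: ${sum(p for _, p in items):.2f}")
        return "\n".join(lines)

    class ReceiptTest(unittest.TestCase):
        def test_receipt_contains_items_and_total(self):
            # Tests the invariant rather than the exact layout, so a
            # cosmetic reformat does not produce a false positive.
            receipt = format_receipt([("tea", 2.50), ("scone", 3.00)])
            self.assertIn("tea", receipt)
            self.assertIn("scone", receipt)
            self.assertIn("$5.50", receipt)

    if __name__ == "__main__":
        unittest.main()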

answered by Steven Sudit


I think you're looking at fixing a symptom, rather than recognizing the whole of the problem. The root problem is that a true API is a published interface*, and it should be subject to the same bounds that you would place on any programming contract: no changes! You can add to an API, and call it API v2, but you can't go back and change API v1.0, otherwise you have indeed broken backward compatibility, which is almost always a bad thing for an API to do.

(* I don't mean to call out any specific interfacing technology or language; 'interface' can mean anything from the class declarations on up.)

I would suggest that a Test Driven Development approach would help prevent many of these kinds of problems in the first place. With TDD you would be "feeling" the awkwardness of the interfaces while you were writing the tests, and you would be compelled to fix those interfaces earlier in the process rather than waiting until after you've written a thousand tests.

One of the primary benefits of Test Driven Development is that it gives you instant feedback on the programmatic use of your class/interface. The act of writing a test is a test of your design, while the act of running the test is the test of your behavior. If it's difficult or awkward to write a test for a particular method, then that method is likely to be used incorrectly, meaning it's a weak design and it should be refactored quickly.
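A small sketch of what that feedback loop looks like (the names here are hypothetical): the test is written first and reads as a specification of the intended usage. If it were awkward to write, that awkwardness would flag the design problem before a thousand tests depended on the interface:

    import unittest

    class TemperatureConverterTest(unittest.TestCase):
        # Written before the implementation exists: the test doubles as
        # a specification of how the function should feel to use.
        def test_converts_freezing_point(self):
            self.assertEqual(celsius_to_fahrenheit(0), 32)

        def test_converts_boiling_point(self):
            self.assertEqual(celsius_to_fahrenheit(100), 212)

    # Implementation written afterwards, shaped by the tests above.
    def celsius_to_fahrenheit(celsius):
        return celsius * 9 / 5 + 32

    if __name__ == "__main__":
        unittest.main()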

answered by John Deters