 

Reusable mocks vs mocking in each test

Our team is in the process of easing into TDD and struggling with best practices for unit tests. Our code under test uses dependency injection. Our tests generally follow an Arrange-Act-Assert layout, where we mock dependencies in the Arrange section with Moq.
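To make that concrete, a typical test of ours looks something like the sketch below. The names (IXRepository, XController, Widget) and the xUnit/Moq details are illustrative stand-ins, not our real code:

    using Moq;
    using Xunit;

    public class Widget
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IXRepository
    {
        bool Save(Widget widget);
    }

    public class XController
    {
        private readonly IXRepository _repository;
        public XController(IXRepository repository) { _repository = repository; }

        // Public contract: returns true when the widget was persisted.
        public bool Save(Widget widget) => _repository.Save(widget);
    }

    public class XControllerTests
    {
        [Fact]
        public void Save_ReturnsTrue_WhenRepositorySucceeds()
        {
            // Arrange: mock the dependency with Moq
            var repository = new Mock<IXRepository>();
            repository.Setup(r => r.Save(It.IsAny<Widget>())).Returns(true);
            var controller = new XController(repository.Object);

            // Act: exercise the controller's public contract
            var result = controller.Save(new Widget { Name = "example" });

            // Assert
            Assert.True(result);
        }
    }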

Theoretically, unit tests should be a shield that protects you when you refactor. But ours are turning into an anchor that prevents us from refactoring. I'm trying to nail down where our process failure is.

Consider the simplified example:

  • XRepository.Save has its signature and behavior/contract changed.
  • XController.Save uses XRepository.Save, so it is refactored to use the new interface. But externally its public contract has not changed (sketched in code below).
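In code, the kind of change I mean looks roughly like this. The exact signatures are invented for illustration; the point is only that the repository's contract moved while the controller's did not:

    // Before the refactor the repository looked like this:
    //
    //     public interface IXRepository
    //     {
    //         bool Save(Widget widget);
    //     }

    // After the refactor, Save returns the persisted entity instead:
    public interface IXRepository
    {
        Widget Save(Widget widget);
    }

    // The controller is reworked internally to use the new signature,
    // but its own public contract (bool Save, true on success) is unchanged:
    public class XController
    {
        private readonly IXRepository _repository;
        public XController(IXRepository repository) { _repository = repository; }

        public bool Save(Widget widget) => _repository.Save(widget) != null;
    }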

I would expect that controller tests do not need to be refactored, but instead prove to me that my new controller implementation honors the unchanged contract. But we have failed here as this is not the case.

Each controller test mocks the repository interface on the fly, so they all need to be changed. Furthermore, since no test wants to mock every interface and method, we find our tests tied to the particular implementation, because each test needs to know which methods to mock.

Refactoring becomes harder and harder the more tests we have! Or, more accurately, the more times we mock an interface.

So my questions:

  1. Any preference for using on-the-fly mocks in each test vs making a reusable hand-crafted mock for each interface?

  2. Given my story, am I missing some principle or falling into a common pitfall?

Thanks!

asked Jan 07 '11 by Craig Celeste

2 Answers

You're not missing any principle, but it is a common problem. I think each team solves it (or not) in their own way.

Side Effects

You will continue to have this issue with any function that has side effects. I have found that, for functions with side effects, I have to write tests that assert some or all of the following (a Moq sketch follows the list):

  • That it was/was not called
  • The number of times it was called
  • What arguments were passed to it
  • Order of calls
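With Moq (which the question already uses) those assertions tend to look something like the sketch below. The IMailer/SignupService types are invented purely to have a side effect to verify:

    using System.Collections.Generic;
    using Moq;
    using Xunit;

    public interface IMailer
    {
        void Send(string to, string body);
    }

    public class SignupService
    {
        private readonly IMailer _mailer;
        public SignupService(IMailer mailer) { _mailer = mailer; }

        // The side effect under test: sends exactly one welcome mail.
        public void Register(string email) => _mailer.Send(email, "welcome");
    }

    public class SignupServiceTests
    {
        [Fact]
        public void Register_SendsExactlyOneWelcomeMail()
        {
            var mailer = new Mock<IMailer>();
            var recipients = new List<string>();

            // Record the arguments (and, implicitly, the order) of each call.
            mailer.Setup(m => m.Send(It.IsAny<string>(), It.IsAny<string>()))
                  .Callback<string, string>((to, body) => recipients.Add(to));

            new SignupService(mailer.Object).Register("a@example.com");

            // Was it called, how many times, and with which arguments?
            mailer.Verify(m => m.Send("a@example.com", "welcome"), Times.Once());
            Assert.Equal(new[] { "a@example.com" }, recipients);
        }
    }

Every one of those assertions knows something about how Register is implemented, which is exactly the coupling described next.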

Assuring this in a test usually means violating encapsulation (the test interacts with and knows about the implementation). Any time you do this, you implicitly couple the test to the implementation. This will cause you to have to update the test whenever you change the parts of the implementation that you are exposing/testing.

Reusable Mocks

I've used reusable mocks to great effect. The trade-off is that their implementation is more complex because it needs to be more complete. You do mitigate the cost of updating tests to accommodate refactors.
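As a rough sketch, reusing the invented IXRepository from the question, a reusable hand-crafted mock (really a fake) might look like this. When the repository contract changes, this one class is the only test code that has to follow it:

    using System.Collections.Generic;

    // One shared, hand-crafted fake per interface, used by many tests.
    public class FakeXRepository : IXRepository
    {
        public List<Widget> Saved { get; } = new List<Widget>();

        public Widget Save(Widget widget)
        {
            Saved.Add(widget);   // record the side effect for assertions
            return widget;       // honour the current repository contract
        }
    }

Tests then new up the fake instead of configuring Mock<IXRepository> inline, and assert against its recorded state (for example, that Saved contains the widget).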

Acceptance TDD

Another option is to change what you're testing for. Since this really means changing your testing strategy, it is not something to enter into lightly. You may want to do a little analysis first and see if it would really be a fit for your situation.

I used to do TDD with unit tests. I ran into issues that I felt we shouldn't have had to deal with, specifically around refactors: I noticed we usually had to update many tests. These refactors were not within a unit of code, but rather the restructuring of major components. I know many people will say the problem was the frequent large changes, not the unit testing. There is probably some truth to the large changes being partially a result of our planning/architecture. However, they were also due to business decisions that caused changes in direction. These and other legitimate causes necessitated large changes to the code. The end result was that large refactors became ever slower and more painful because of all the test updates.

We also ran into bugs due to integration issues that unit tests did not cover. We caught some of these with manual acceptance testing. We actually did quite a bit of work to make the acceptance tests as low touch as possible. They were still manual, though, and we felt there was so much crossover between the unit tests and the acceptance tests that there should be a way to mitigate the cost of implementing both.

Then the company had layoffs. All of a sudden we didn't have the same resources to throw at programming and maintenance. We were pushed to get the biggest return for everything we did, including testing. We started by adding what we called partial stack tests to cover the common integration problems we had. They turned out to be so effective that we started doing less classic unit testing. We also got rid of the manual acceptance tests (Selenium). We slowly pushed the starting point of the tests up the stack until we were essentially doing acceptance tests, but without the browser. We would simulate a GET, POST or PUT to a particular controller and check the acceptance criteria (a rough sketch follows the list):

  • The database was updated correctly
  • The correct HTTP status code was returned
  • A page was returned that:
    • was valid HTML 4.01 Strict
    • contained the information we wanted to send back to the user
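In today's terms, the shape of one of those tests is roughly the following. ASP.NET Core's in-memory test host stands in for whatever hosting you have; the /widgets endpoint, the Program entry point, and the database check are invented placeholders:

    using System.Collections.Generic;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc.Testing;
    using Xunit;

    // A "partial stack" acceptance test: real controller, real pipeline, no browser.
    public class SaveWidgetAcceptanceTests : IClassFixture<WebApplicationFactory<Program>>
    {
        private readonly WebApplicationFactory<Program> _factory;

        public SaveWidgetAcceptanceTests(WebApplicationFactory<Program> factory)
        {
            _factory = factory;
        }

        [Fact]
        public async Task PostWidget_PersistsAndReturnsValidPage()
        {
            var client = _factory.CreateClient();

            // Simulate the POST the browser would have sent.
            var response = await client.PostAsync("/widgets",
                new FormUrlEncodedContent(new Dictionary<string, string> { ["name"] = "example" }));

            // Correct HTTP status code was returned.
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);

            // The page contains the information we wanted to send back.
            var html = await response.Content.ReadAsStringAsync();
            Assert.Contains("example", html);

            // The database was updated correctly. CountWidgetsNamed is a
            // placeholder for whatever query your test database exposes;
            // markup validation (HTML 4.01 Strict in our case) is omitted here.
            // Assert.Equal(1, TestDatabase.CountWidgetsNamed("example"));
        }
    }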

We ended up having fewer bugs. In particular, almost all of the integration bugs, and the bugs due to large refactors, disappeared almost completely.

There were trade-offs. It just turned out that the pros far outweighed the cons for our situation. Cons:

  • The tests were usually more complicated, and almost every one of them tests some side effects.
  • We can tell when something breaks, but it's not as targeted as the unit tests so we do have to do more debugging to track down where the problem is.
answered by dietbuddha


I've struggled with this kind of issue myself and don't have an answer that I feel is solid, but here is a tentative way of thinking. I observe two kinds of unit tests:

  1. There are tests that exercise the public interface. These are very important if we are to refactor with confidence; they prove that we honour our contract to our clients. These tests are best served by a hand-crafted reusable mock which deals with a small subset of test data.
  2. There are "coverage" tests. These tend to prove that our implementation behaves correctly when dependencies misbehave. These, I think, need on-the-fly mocks to provoke particular implementation paths. (Both kinds are sketched below.)
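To make the split concrete, here is a minimal sketch of both kinds, reusing the invented IXRepository/XController types from the question and the FakeXRepository from the other answer:

    using Moq;
    using Xunit;

    public class XControllerContractAndCoverageTests
    {
        // 1. Contract test: the shared hand-crafted fake survives repository
        //    refactors, because only the fake has to track the new interface.
        [Fact]
        public void Save_HonoursThePublicContract()
        {
            var repository = new FakeXRepository();
            var controller = new XController(repository);

            Assert.True(controller.Save(new Widget { Name = "example" }));
            Assert.Single(repository.Saved);
        }

        // 2. Coverage test: an on-the-fly Moq mock makes the dependency
        //    misbehave so a specific implementation path gets exercised.
        [Fact]
        public void Save_ReturnsFalse_WhenTheRepositoryFails()
        {
            var repository = new Mock<IXRepository>();
            repository.Setup(r => r.Save(It.IsAny<Widget>())).Returns((Widget)null);

            var controller = new XController(repository.Object);

            Assert.False(controller.Save(new Widget { Name = "example" }));
        }
    }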
answered by djna