
In TDD, how do you write tests for code that inherently has side effects?

If a function's side effects are inherent to its design, how do I develop such a function?

For instance, if I wanted to implement a function like http.get("url") and stubbed the side effect as a service passed in via dependency injection, it would look like:

var http = {
  get: function (url, service) {
    // Delegate the actual network call to the injected service.
    return new Promise(function (resolve) {
      service(url).then(function (response) {
        resolve(response);
      });
    });
  }
};

...but then I would need to implement the service, which is identical to the original http.get(url), has the same side effects, and therefore puts me in a development loop. Do I have to mock a server to test such a function, and if so, what part of the TDD cycle does that fall under? Is it integration testing, or is it still unit testing?

Another example would be a model for a database. If I'm developing code that works with a database, I'll design an interface, abstract a model implementing that interface, and pass it into my code using dependency injection. As long as my model implements the interface, I can use any database and easily stub its state and responses to implement TDD for the other functions that interact with the database. What about the model itself, though? It's going to interact with a database, so that side effect seems inherent to the design, and abstracting it away puts me into a development loop when I go to implement that abstraction. How do I implement the model's methods without being able to abstract them away?
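
For example, here is a rough sketch of the setup I mean (the names are made up):

// The rest of my code only depends on this (duck-typed) interface:
//   findById(id) -> Promise of a user object or null
//   save(user)   -> Promise

// In-memory stub I can inject while TDD-ing the code around the model:
function createInMemoryUserModel(seed) {
  var users = seed || {};
  return {
    findById: function (id) { return Promise.resolve(users[id] || null); },
    save: function (user) { users[user.id] = user; return Promise.resolve(); }
  };
}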

ogginger asked Mar 22 '18

2 Answers

In TDD, how do you write tests for code that inherently has side effects?

I don't think I've seen a particularly clear answer for this anywhere; the closest is probably GOOS (Growing Object-Oriented Software, Guided by Tests) -- the "London" school of TDD tends to work outside-in.

But broadly, you need to have a sense that side effects belong in the imperative shell. They are usually implemented within an infrastructure component. So you'll typically want a higher level abstraction that you can pass to the functional part of your system.

For example, reading the system clock is a side effect, producing a time since epoch value. Most of your system shouldn't care where the time comes from, so the abstraction of reading the clock should be an input to the system.
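
As a minimal sketch (the names here are illustrative, not prescriptive): the pure core takes the time as a value, and only the shell reads the clock.

// Functional core: pure, trivially testable with plain values.
function isExpired(token, now) {
  return token.expiresAt <= now;
}

// Imperative shell: the only place that touches the system clock.
function checkToken(token, clock) {
  clock = clock || Date.now;
  return isExpired(token, clock());
}

// In a test, pass a fixed time instead of the real clock:
// isExpired({ expiresAt: 1000 }, 999)                           -> false
// checkToken({ expiresAt: 1000 }, function () { return 1001; }) -> true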

Now, it can feel like "turtles all the way down" -- how do you test your interaction with the infrastructure? Kent Beck describes a stopping condition:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence....

I tend to lean on Hoare's observation:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies

Once you get down to an implementation of a side effect that is obviously correct, you stop worrying about it.

When you are staring at a side effect, and the implementation is not obviously correct, you start looking for ways to pull the hard part back into the functional core, further isolating the side effect.
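
A hedged sketch of what that looks like (the report and file names are illustrative): the formatting logic is pulled into a pure function, and the remaining write is a one-liner that is obviously correct.

const fs = require("fs");

// Hard part, pulled into the functional core: pure and easy to test.
function formatReport(entries) {
  return entries
    .map(function (e) { return e.date + "," + e.amount; })
    .join("\n");
}

// Remaining side effect: so thin it is "obviously correct".
function saveReport(path, entries) {
  fs.writeFileSync(path, formatReport(entries));
}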

The actual testing of the side effects typically happens when you start wiring all of the components together. Because of the side effects, these tests are typically slower; because they share mutable state, you often need to ensure that they are running sequentially.
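
One possible shape for such a test, sketched with Node's built-in http and assert modules (the local server and URL are assumptions for illustration): the test stands up a real server, exercises the real side effect, and cleans up after itself so tests can run one at a time.

var http = require("http");
var assert = require("assert");

// Real server, real network call: slower, and must not share state
// with other tests running at the same time.
var server = http.createServer(function (req, res) {
  res.end("hello");
});

server.listen(0, function () {
  var port = server.address().port;
  http.get("http://localhost:" + port + "/", function (res) {
    var body = "";
    res.on("data", function (chunk) { body += chunk; });
    res.on("end", function () {
      assert.strictEqual(body, "hello");
      server.close();
    });
  });
});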

VoiceOfUnreason answered Jan 01 '23

If you are writing a unit test for a module like that, focus on the module itself, not on its dependencies. For example, how is it supposed to react to the db/service being down, throwing an exception/error, returning null data, returning good data, etc.? That's why you mock the dependencies and return different values or set different behavior, like throwing an exception.
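
A hedged sketch of that idea, using hand-rolled stubs rather than any particular mocking library (the service and model names are made up):

// The module under test only sees whatever model it is given.
function createUserService(model) {
  return {
    getName: function (id) {
      return model.findById(id)
        .then(function (user) { return user ? user.name : "unknown"; })
        .catch(function () { return "unavailable"; });
    }
  };
}

// "Good data" case
var happyModel = { findById: function () { return Promise.resolve({ name: "Ada" }); } };
// "Null data" case
var emptyModel = { findById: function () { return Promise.resolve(null); } };
// "Database down" case
var brokenModel = { findById: function () { return Promise.reject(new Error("down")); } };

createUserService(happyModel).getName(1).then(console.log);  // "Ada"
createUserService(emptyModel).getName(1).then(console.log);  // "unknown"
createUserService(brokenModel).getName(1).then(console.log); // "unavailable"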

AD.Net answered Jan 01 '23