
Testing without relying on implementation details

Imagine the following contrived example:

public class LoginController {

    private readonly IValidate _validator;
    private readonly IAuthenticate _authenticator;

    public LoginController(IValidate validator, IAuthenticate authenticator) {
        _validator = validator;
        _authenticator = authenticator;
    }

    public HttpStatusCode Login(LoginRequest request) {
        if (!_validator.IsValid(request)) {
            return HttpStatusCode.BadRequest;
        }

        if (!_authenticator.IsAuthenticated(request.Email, request.Password)) {
            return HttpStatusCode.Unauthorized;
        }

        return HttpStatusCode.OK;
    }
}

public class LoginRequest {
    public string Email {get; set;}
    public string Password {get; set;}
}

public interface IValidate {
    bool IsValid(LoginRequest request);
}

public interface IAuthenticate {
    bool IsAuthenticated(string email, string password);
}

Typically I would write tests like the following:

[TestFixture]
public class InvalidRequest
{
    private LoginRequest _invalidRequest;
    private IValidate _validator;
    private HttpStatusCode _response;

    void GivenARequest()
    {
        _invalidRequest = new LoginRequest();
    }

    void AndGivenThatRequestIsInvalid() {
        _validator = Substitute.For<IValidate>();
        _validator.IsValid(_invalidRequest).Returns(false);
    }

    void WhenAttemptingLogin()
    {
        _response = new LoginController(_validator, null)
                                .Login(_invalidRequest);
    }

    void ThenShouldRespondWithBadRequest()
    {
        Assert.AreEqual(HttpStatusCode.BadRequest, _response);
    }

    [Test]
    public void Execute()
    {
        this.BDDfy();
    }
}

[TestFixture]
public class LoginUnsuccessful
{
    private LoginRequest _request;
    private IValidate _validator;
    private IAuthenticate _authenticate;
    private HttpStatusCode _response;

    void GivenARequest()
    {
        _request = new LoginRequest();
    }

    void AndGivenThatRequestIsValid() {
        _validator = Substitute.For<IValidate>();
        _validator.IsValid(_request).Returns(true);
    }

    void ButGivenTheLoginCredentialsDoNotExist() {
        _authenticate = Substitute.For<IAuthenticate>();
        _authenticate.IsAuthenticated(
            _request.Email,
            _request.Password
        ).Returns(false);
    }   

    void WhenAttemptingLogin()
    {
        _response = new LoginController(_validator, _authenticate)
                                .Login(_request);
    }

    void ThenShouldRespondWithUnauthorized()
    {
        Assert.AreEqual(HttpStatusCode.Unauthorized, _response);
    }

    [Test]
    public void Execute()
    {
        this.BDDfy();
    }
}

However, after watching Ian Cooper's talk TDD, Where Did It All Go Wrong and doing some more reading, I'm starting to think that my tests are too closely tied to the implementation of the code. For instance, the behaviour I'm trying to test in the first case is that attempting to log in with an invalid request responds with an HTTP status code of Bad Request. The issue is that I'm testing this by stubbing the IValidate dependency. If the implementer decides the IValidate abstraction is no longer useful and validates the request inline in the Login method, the behaviour of the system hasn't changed, yet my tests now break.
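For example, this hypothetical refactoring (the inline validation rule is made up for illustration) changes no observable behaviour, yet the first test above breaks because it constructs the controller with a stubbed IValidate and a now-nonexistent two-argument constructor:

public class LoginController {

    private readonly IAuthenticate _authenticator;

    // The IValidate dependency is gone; validation is now an implementation detail.
    public LoginController(IAuthenticate authenticator) {
        _authenticator = authenticator;
    }

    public HttpStatusCode Login(LoginRequest request) {
        // Hypothetical inline validation rule, assumed for illustration.
        if (string.IsNullOrEmpty(request.Email) || string.IsNullOrEmpty(request.Password)) {
            return HttpStatusCode.BadRequest;
        }

        if (!_authenticator.IsAuthenticated(request.Email, request.Password)) {
            return HttpStatusCode.Unauthorized;
        }

        return HttpStatusCode.OK;
    }
}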

But then the only other alternative seems to be an integration test, where I launch the web server, hit the login endpoint, and assert on the response. The issue is that this is brittle and complicated, as we would ultimately need a valid user in the third-party credential store to test the successful-login scenario.
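To be clear, something like the following sketch is what I have in mind; the host, route, request format, and credentials here are all made up, and the test assumes the whole system (including the credential store) is running somewhere reachable:

using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class LoginEndpointTests {

    // Assumed local test host; in practice this would point at a deployed test instance.
    private static readonly HttpClient Client = new HttpClient {
        BaseAddress = new Uri("http://localhost:5000")
    };

    [Test]
    public async Task InvalidRequestRespondsWithBadRequest() {
        // An empty body should fail whatever validation the running system applies.
        var response = await Client.PostAsync(
            "/api/login",
            new StringContent("{}", Encoding.UTF8, "application/json"));

        Assert.AreEqual(HttpStatusCode.BadRequest, response.StatusCode);
    }

    [Test]
    public async Task UnknownCredentialsRespondWithUnauthorized() {
        var body = "{\"Email\":\"nobody@example.com\",\"Password\":\"wrong\"}";
        var response = await Client.PostAsync(
            "/api/login",
            new StringContent(body, Encoding.UTF8, "application/json"));

        Assert.AreEqual(HttpStatusCode.Unauthorized, response.StatusCode);
    }
}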

So my question is, is my understanding incorrect, or is there a middle ground between testing against the implementation and full-blown integration testing?

asked Sep 07 '14 by kimsagro


2 Answers

Like most other aspects of our trade, there are trade-offs involved.

  • If you test at the unit level, some tests may be too brittle.
  • If you test at the behavioural level, you can't cover all cases.

Lots of people have declared unit testing and Test-Driven Development (TDD) dead, and see Behaviour-Driven Development (BDD) as the new silver bullet. Obviously, neither of them is a silver bullet.

In your question, you've already outlined one type of problem with unit tests, so although I'd like to get back to those, let's start by looking at BDD.

The problem with Integration Tests

In his seminal talk Integration Tests Are a Scam, J.B. Rainsberger explains why Integration Tests (including most BDD-style tests) are problematic. You really should see the recording, but the essence of it is that Integration Testing involves a combinatorial explosion of test cases.

Consider your own trivial example. The Login method of the LoginController has a Cyclomatic Complexity of 3, since there are 3 ways through it. If you want to test only the behaviour, you'll need to integrate it with appropriate implementations of its dependencies.

Just by looking at the method signatures, we can see that since both _validator.IsValid and _authenticator.IsAuthenticated return bool, there must be at least 2 ways through each of them.

Thus, with these optimistic numbers, the upper bound on the number of permutations of integrating these three objects is 3 * 2 * 2 = 12. The actual number is less than that, because some branches return early, but the order of magnitude is about right. The problem is that if e.g. the validator has a higher degree of complexity, and particularly if it has dependencies of its own, the number of possible combinations explodes and quickly reaches five- or six-digit numbers.
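As a rough sketch of that arithmetic (the path counts beyond your example are made up), integrating components multiplies their path counts:

using System;
using System.Linq;

public static class PathMath {

    // Upper bound on the number of paths through an integrated object graph:
    // the product of the paths through each component.
    public static long IntegratedUpperBound(params int[] pathsPerComponent) {
        return pathsPerComponent.Aggregate(1L, (acc, paths) => acc * paths);
    }

    public static void Main() {
        // Your example: controller (3) * validator (2) * authenticator (2).
        Console.WriteLine(IntegratedUpperBound(3, 2, 2));          // 12

        // A slightly deeper, slightly more complex object graph.
        Console.WriteLine(IntegratedUpperBound(7, 5, 4, 6, 8, 9)); // 60480
    }
}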

There's no way you can write all those test cases.

The problem with unit tests

When you write unit tests, you can keep the number of combinations down. Instead of having to multiply all possible combinations of code paths, you can add them together in order to get an idea about the number of test cases you have to write. This enables you to keep the number of tests down, and you can get better coverage. In fact, you can get perfect coverage with unit tests.
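With your example, that means the controller itself needs only three unit tests, one per path. You already have two; a sketch of the third (the happy path), in the same style as your tests (the credential values are made up), would look something like this, and the validator and authenticator implementations then get their own tests, which you add rather than multiply:

[TestFixture]
public class LoginSuccessful {

    private LoginRequest _request;
    private IValidate _validator;
    private IAuthenticate _authenticator;
    private HttpStatusCode _response;

    void GivenAValidRequest() {
        _request = new LoginRequest { Email = "user@example.com", Password = "secret" };
        _validator = Substitute.For<IValidate>();
        _validator.IsValid(_request).Returns(true);
    }

    void AndGivenTheCredentialsExist() {
        _authenticator = Substitute.For<IAuthenticate>();
        _authenticator.IsAuthenticated(_request.Email, _request.Password).Returns(true);
    }

    void WhenAttemptingLogin() {
        _response = new LoginController(_validator, _authenticator).Login(_request);
    }

    void ThenShouldRespondWithOk() {
        Assert.AreEqual(HttpStatusCode.OK, _response);
    }

    [Test]
    public void Execute() {
        this.BDDfy();
    }
}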

The problem, then, is exactly as you describe. In a sense, you test what feels like the implementation. It is, but it's only part of the implementation, and that's the whole point. Still, it means that when things change, unit tests are affected, whereas Integration Tests are affected to a far lesser degree.

Adopting an Append-Only strategy for tests helps a bit, but it can still feel like overhead.

The test pyramid

All of this explains why Mike Cohn recommends the Test Pyramid:

  • Lots of unit tests to ensure that you're building the thing right.
  • Fewer integration tests to ensure that you're building the right thing.
answered Sep 19 '22 by Mark Seemann


I follow the BDD approach of initially test-driving the system with acceptance tests (which are integration tests), and unit-testing where necessary to drive details. The acceptance tests are independent of the implementation because they interact with the system only through the user interface. Unit tests are necessarily implementation-dependent, since each one tests a single class (your example is really a unit test of the controller), but you only have to write them when your acceptance tests don't cover all of the behaviour, so at least some of the time you avoid tests that are tightly coupled to the implementation.

I've specifically found that in well-factored web apps acceptance tests often cover controllers almost entirely and there is little need to unit-test controllers. Models and other classes that controllers delegate to need plenty of unit tests, but those classes tend to have more meaningful behavior and unit-testing them is more productive.
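As a sketch of what that looks like in your example (the RequestValidator class and its rule are hypothetical), a delegated class can be unit-tested directly against its behaviour, with no stubbing at all:

// Hypothetical concrete validator the controller would delegate to.
public class RequestValidator : IValidate {
    public bool IsValid(LoginRequest request) {
        return !string.IsNullOrEmpty(request.Email)
            && !string.IsNullOrEmpty(request.Password);
    }
}

[TestFixture]
public class RequestValidatorTests {

    [Test]
    public void RejectsARequestWithNoEmail() {
        var validator = new RequestValidator();

        var isValid = validator.IsValid(new LoginRequest { Email = "", Password = "secret" });

        Assert.IsFalse(isValid);
    }

    [Test]
    public void AcceptsACompleteRequest() {
        var validator = new RequestValidator();

        var isValid = validator.IsValid(new LoginRequest { Email = "user@example.com", Password = "secret" });

        Assert.IsTrue(isValid);
    }
}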

That just leaves what to do about your external credential store. If there's no way to write acceptance tests against the real store (you don't have a test instance of the store or a test account in the production instance), be practical and stub that. Ensure that you're integration-testing as much of your code as possible by putting the code that actually contacts the store in its own class, free of business logic, and stubbing only that class. You may be able to write a unit test or two for the store adapter class which tests that the connection to the store works.
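A minimal sketch of that seam, assuming a hypothetical vendor SDK type called CredentialStoreClient with a Verify method (both are assumptions for illustration):

// The seam: the rest of the system depends only on this interface.
public interface ICredentialStore {
    bool CredentialsExist(string email, string password);
}

// Thin adapter, free of business logic; the only class that talks to the external store.
public class CredentialStoreAdapter : ICredentialStore {

    private readonly CredentialStoreClient _client; // hypothetical vendor SDK client

    public CredentialStoreAdapter(CredentialStoreClient client) {
        _client = client;
    }

    public bool CredentialsExist(string email, string password) {
        return _client.Verify(email, password); // hypothetical SDK call
    }
}

// Stub used by acceptance tests so they don't depend on the real store.
public class StubCredentialStore : ICredentialStore {
    public bool CredentialsExist(string email, string password) {
        return email == "known@example.com" && password == "correct";
    }
}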

answered Sep 22 '22 by Dave Schweisguth