Integration tests, but how much? [closed]

A recent debate within my team made me wonder. The basic question is how much, and what, we should cover with functional/integration tests (sure, they are not the same thing, but the example below is a dummy where the distinction doesn't matter).

Let's say you have a "controller" class something like:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;

public class SomeController {
    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        // Bail out early on invalid input.
        if (!val.validate(input)) {
            return false;
        }

        // Nothing to work with.
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }

        BusinessStuff businessStuff = tr.transform(stuffs);
        if (businessStuff == null) {
            return false;
        }

        return calc.check(businessStuff);
    }
}

We need a lot of unit tests for sure (e.g., validation fails, no data in the DB, ...); that's beyond question.

Our main issue, and the thing we cannot agree on, is how much of this the integration tests should cover :-)

I'm on the side that we should aim for fewer integration tests (test pyramid). What I would cover here is just a single happy path plus one unhappy path, where the execution returns from the last line, just to see that when I put these pieces together, nothing blows up.
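Something like this rough sketch, assuming Spring's JUnit 4 support; TestConfig, validInput(), and inputWithNoStuffs() are made-up names for the sake of the example:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestConfig.class) // wires the real beans
public class SomeControllerIT {

    @Autowired
    SomeController controller;

    // Happy path: one run through validator -> DAO -> transformer -> calculator.
    @Test
    public void happyPathSurvivesTheWiring() {
        assertTrue(controller.doCheck(validInput()));
    }

    // One unhappy path: just to see the early exits are wired in too.
    @Test
    public void unknownInputReturnsFalse() {
        assertFalse(controller.doCheck(inputWithNoStuffs()));
    }

    // Made-up fixture helpers; real ones would build proper Input objects.
    private Input validInput() { return new Input(); }
    private Input inputWithNoStuffs() { return new Input(); }
}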

The problem is that it is not easy to tell why the test resulted in false, and that makes some of the guys uneasy (e.g., if we check only the return value, it stays hidden that the test is green merely because someone changed the validation and it now returns false). Sure, we could cover all the cases, but that would be heavy overkill IMHO.

Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Reading? A talk? A blog post? Anything on the topic?

Thanks a lot in advance!

PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yes, one can argue about throwing exceptions/using a different return type/etc., but our hands are more or less tied because of external dependencies.

asked Feb 27 '17 by rlegendi


2 Answers

It's easy to figure out where the test should reside if you follow these rules:

  • We check the logic at the Unit Test level, and we check that the logic is invoked at the Component or System level.
  • We don't use mocking frameworks (Mockito, JMock, etc.).

Let's dive, but first let's agree on terminology:

  • Unit Tests - check a method, a class, or a few of them in isolation
  • Component Tests - initialize a piece of the app but don't deploy it to the App Server. An example would be initializing Spring contexts in the tests.
  • System Tests - require a full deployment on the App Server. An example would be sending HTTP REST requests to a remote server.

If we build a balanced pyramid we'll end up with most tests on Unit and Component levels and few of them will be left to System Testing. This is good since lower-level tests are faster and easier. To do that:

  • We should put the business logic as low as possible (preferably in the Domain Model), as this allows us to easily test it in isolation. Each time you go through a collection of objects and apply conditions to it, that logic ideally belongs in the Domain Model.
  • But the fact that the logic works doesn't mean it's invoked correctly. That's where you need Component Tests. Initialize your Controllers as well as the services and DAOs, and then call them once or twice to see whether the logic is invoked.

Example: a user's name cannot exceed 50 symbols and can contain only Latin letters plus some special symbols.

  • Unit Tests - create Users with right and wrong usernames; check that exceptions are thrown for the invalid names and, vice versa, that the valid names pass
  • Component Tests - check that when you pass an invalid user to the Controller (if you use Spring MVC, you can do that with MockMvc) it returns an error. Here you need to pass only one user: all the rules have already been checked by now; at this level you're interested only in whether those rules are invoked (see the sketch after this list)
  • System Tests - you may not actually need them for this scenario.
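A minimal sketch of the first two levels for this username rule. User, UserController, and its POST /users endpoint are made-up names; MockMvc ships with spring-test:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

public class UserNameRulesTest {

    // 51 'a' characters: one over the 50-symbol limit.
    private final String tooLongName = new String(new char[51]).replace('\0', 'a');

    // Unit level: the rule itself, in isolation. Assumes a (made-up) User
    // constructor that throws IllegalArgumentException for invalid names.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNamesOver50Symbols() {
        new User(tooLongName);
    }

    // Component level: one invalid request is enough -- the rule variants are
    // already covered above; here we only check that the rules are invoked.
    @Test
    public void controllerRejectsInvalidName() throws Exception {
        MockMvc mvc = MockMvcBuilders.standaloneSetup(new UserController()).build();
        mvc.perform(post("/users").param("name", tooLongName))
           .andExpect(status().isBadRequest());
    }
}

The component test deliberately reuses a single invalid name: if it goes red, the wiring is broken; the rule variants stay in the unit tests.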

Here is a more elaborate example of how you can implement a balanced pyramid.

answered Sep 21 '22 by Stanislav Bashkyrtsev


In general we write an integration test at every starting point of the application (let's say every controller). We validate some happy flows and some error flows, with a couple of asserts to give us some peace of mind that we didn't break anything.
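For illustration, such an entry-point test might look like this; OrderController and its GET /orders/{id} endpoint are invented for the sketch, with the back-ends assumed to be stubbed behind it:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

public class OrderControllerIT {

    // Standalone setup keeps the test fast; OrderController is a made-up name.
    private final MockMvc mvc =
            MockMvcBuilders.standaloneSetup(new OrderController()).build();

    // Happy flow: a couple of asserts, enough for peace of mind.
    @Test
    public void returnsExistingOrder() throws Exception {
        mvc.perform(get("/orders/42"))
           .andExpect(status().isOk())
           .andExpect(jsonPath("$.id").value(42));
    }

    // Error flow: an unknown id is reported, not swallowed.
    @Test
    public void reportsUnknownOrder() throws Exception {
        mvc.perform(get("/orders/9999"))
           .andExpect(status().isNotFound());
    }
}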

However, we also write tests at lower levels in response to regressions or when multiple classes are involved in a piece of complicated behaviour.

We use Integration tests mainly to catch the following types of regressions:

  1. Refactoring mistakes (not caught by Unit tests).

For refactoring problems, a couple of integration tests that hit a good portion of your application are more than sufficient. Refactoring often touches a large number of classes, so these tests will expose things like using the wrong class or parameter somewhere.

  2. Early detection of injection problems (context not loading, Spring)

Injection problems often happen because of missing annotations or mistakes in the XML config. The first integration test that runs and sets up the entire context (apart from mocking the back-ends) will catch these every time (a minimal version is sketched after this list).

  3. Bugs in super complicated logic that are nearly impossible to test without controlling all the inputs

Sometimes you have code that is spread over several classes and needs filtering, transformations, etc., and no one really understands what is going on. What's worse, it is nearly impossible to test on a live system because the underlying data sources cannot easily provide the exact scenario that triggers the bug.

For these cases (once discovered) we add a new integration test, where we feed the system the input that caused the bug and then verify that it behaves as expected. This gives a lot of peace of mind after extensive code changes.
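The context-loading test mentioned under point 2 can be as small as the sketch below; AppConfig is a placeholder for your real root configuration (with the back-ends mocked inside it):

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Instantiates and wires every bean, so a missing annotation or a broken
// config fails here, before any behavioural test runs.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class)
public class ContextLoadsIT {

    @Autowired
    ApplicationContext context;

    @Test
    public void contextLoads() {
        assertNotNull(context);
    }
}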

answered Sep 22 '22 by john16384