A recent debate within my team made me wonder. The basic question is how much, and what exactly, we should cover with functional/integration tests (sure, they are not the same thing, but the example below is a dummy where the difference doesn't matter).
Let's say you have a "controller" class, something like this:
public class SomeController {

    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        // reject invalid input straight away
        if (!val.validate(input)) {
            return false;
        }
        // nothing to work on
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }
        // the transformation may yield nothing usable
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (null == businessStuff) {
            return false;
        }
        return calc.check(businessStuff);
    }
}
We need a lot of unit tests for sure (e.g., validation fails, no data in the DB, ...); that's not in question.
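For example, the early-exit branches can be covered with plain mocks, roughly something like this (JUnit 5 + Mockito; the test names and the `new Input()` fixture are just illustrative, and I'm assuming Input has a no-arg constructor):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.*;

import java.util.Collections;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class SomeControllerTest {

    @Mock Validator val;
    @Mock DataAccess da;
    @Mock SomeTransformer tr;
    @Mock Calculator calc;

    // Mockito injects the mocks into the @Autowired fields
    @InjectMocks SomeController controller;

    @Test
    void returnsFalseWhenValidationFails() {
        when(val.validate(any(Input.class))).thenReturn(false);

        assertFalse(controller.doCheck(new Input()));

        // the rest of the pipeline must not even be touched
        verifyNoInteractions(da, tr, calc);
    }

    @Test
    void returnsFalseWhenNoDataIsFound() {
        when(val.validate(any(Input.class))).thenReturn(true);
        when(da.loadStuffs(any(Input.class))).thenReturn(Collections.emptyList());

        assertFalse(controller.doCheck(new Input()));
    }
}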
Our main issue, and the thing we cannot agree on, is how much of it integration tests should cover :-)
I'm on the side of aiming for fewer integration tests (test pyramid). What I would cover here is only a single happy/unhappy path where the execution returns from the last line, just to see that when I wire these pieces together, nothing blows up.
The problem is that it is not easy to tell why such a test returned false, and that makes some of the guys uneasy (e.g., if we only check the return value, it stays hidden that the test is green merely because someone changed the validation and it now returns false early). Sure, we could cover all the cases, but that would be heavy overkill IMHO.
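To make it concrete, the single integration test I have in mind is roughly the following (a sketch only: @SpyBean on the calculator is just one possible way to see whether the last line was actually reached, and the test-data helper is hypothetical):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;

@SpringBootTest
class SomeControllerIT {

    @Autowired SomeController controller;

    // the real Calculator bean, wrapped in a spy so we can check it was reached
    @SpyBean Calculator calc;

    @Test
    void unhappyPathGoesAllTheWayToTheCalculator() {
        // hypothetical helper: input that passes validation and loads data, but fails the final check
        Input input = inputThatPassesValidationButFailsTheCheck();

        assertFalse(controller.doCheck(input));

        // distinguishes "false from calc.check()" from "false because validation or loading short-circuited"
        verify(calc).check(any(BusinessStuff.class));
    }

    private Input inputThatPassesValidationButFailsTheCheck() {
        return new Input(); // in reality: set up the back-end/test data accordingly (elided here)
    }
}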
Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Reading? Talk? Blog post? Anything on the topic?
Thanks a lot in advance!
PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yeah, one can argue about throwing exceptions/using a different return type/etc., but our hands are more or less tied because of external dependencies.
Big bang testing means that integration testing starts once code writing is finished and the first version of the product is ready for release. It can save time, since the team doesn't pause development to check every unit; however, the test cases then need to be flawless.
The most common problem I see with integration testing is that most attempts do not recognize that the people "using" the system have different expectations of it, and will use the "integrations" differently depending on those expectations and their business needs.
Integration test cases focus mainly on the interfaces between modules, the integrated links, and the data transfer between modules; the modules/components themselves are already unit tested, i.e. their functionality and other testing aspects have already been covered.
It's easy to figure out where the test should reside if you follow these rules:
Let's dive in, but first let's agree on terminology:
If we build a balanced pyramid, we'll end up with most tests at the Unit and Component levels, with only a few left for System Testing. This is good, since lower-level tests are faster and easier. To do that:
Example: a user's name cannot exceed 50 symbols and can contain only Latin letters as well as some special symbols.
Here is a more elaborate example of how you can implement a balanced pyramid.
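I can only sketch what such an example might look like. Assuming the user-name rule above lives in a hypothetical UserNameValidator, the bulk of the cases sit in a fast unit test like this, and a single higher-level test is enough to prove the validator is actually wired into the flow:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class UserNameValidatorTest {

    // hypothetical class implementing the rule: max 50 symbols, Latin letters plus a few special characters
    UserNameValidator validator = new UserNameValidator();

    @Test
    void acceptsOrdinaryLatinName() {
        assertTrue(validator.isValid("John-Smith"));
    }

    @Test
    void rejectsNameLongerThanFiftySymbols() {
        assertFalse(validator.isValid("a".repeat(51)));
    }

    @Test
    void rejectsNonLatinCharacters() {
        assertFalse(validator.isValid("Жора"));
    }
}

All boundary and character-set cases run in milliseconds here; the system-level suite then only needs one case (e.g., submitting a user with an invalid name end to end) to confirm the wiring.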
In general we write an integration test at every starting point of the application (let's say every controller). We validate some happy flows and some error flows, with a couple of asserts to give us some peace of mind that we didn't break anything.
However, we also write tests at lower levels in response to regressions or when multiple classes are involved in a piece of complicated behaviour.
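As a rough illustration (not our exact setup), such a per-controller test could look like the following, assuming a Spring Boot context where only the external back-end is mocked; the fixture helpers and expected results are placeholders:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.when;

import java.util.Collections;
import java.util.List;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;

@SpringBootTest
class SomeControllerFlowIT {

    @Autowired SomeController controller;

    // only the external dependency is replaced; Validator, SomeTransformer and Calculator are the real beans
    @MockBean DataAccess da;

    @Test
    void happyFlowReturnsTheResultOfTheCalculation() {
        when(da.loadStuffs(any(Input.class))).thenReturn(List.of(someStuff()));

        assertTrue(controller.doCheck(validInput()));
    }

    @Test
    void errorFlowReturnsFalseWhenNothingIsLoaded() {
        when(da.loadStuffs(any(Input.class))).thenReturn(Collections.emptyList());

        assertFalse(controller.doCheck(validInput()));
    }

    // hypothetical fixture helpers; real tests would build meaningful data
    private Input validInput() { return new Input(); }
    private Stuff someStuff() { return new Stuff(); }
}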
We use Integration tests mainly to catch the following types of regressions:
For problems with refactoring, a couple of IT tests that hit a good portion of your application are more than sufficient. Refactoring often touches a large portion of the classes, so these tests will expose things like using the wrong class or parameter somewhere.
Injection problems often happen because of missing annotations or mistakes in XML config. The first integration test that runs and sets up the entire context (apart from mocking the back-ends) will catch these every time.
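The cheapest guard for that is a test whose only job is to bootstrap the context. A minimal sketch, assuming Spring Boot (with an XML config the equivalent would point @SpringJUnitConfig at the config file):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationContext;

@SpringBootTest
class ApplicationContextIT {

    @Autowired ApplicationContext context;

    @Test
    void contextLoadsWithAllBeansWired() {
        // a missing @Component/@Autowired or a typo in the config makes this fail at startup
        assertNotNull(context.getBean(SomeController.class));
    }
}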
Sometimes you have code that is spread over several classes and needs filtering, transformations, etc., and no one really understands what is going on. What's worse, it is nearly impossible to test on a live system because the underlying data sources cannot easily produce the exact scenario that triggers a bug.
For these cases (once discovered) we add a new integration test, where we feed the system the input that caused the bug and then verify that it behaves as expected. This gives a lot of peace of mind after extensive code changes.
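In practice such a test is little more than the recorded input plus the expected outcome. A hedged sketch (the bug id, fixture name, and loader are made up for illustration):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class Bug1234RegressionIT {

    @Autowired SomeController controller;

    @Test
    void handlesTheInputThatUsedToTriggerTheBug() {
        // the exact payload captured from the live system when the bug was reported
        Input recorded = loadRecordedInput("bug-1234-input.json"); // hypothetical fixture loader

        // before the fix, this combination of filtering/transformation produced the wrong result
        assertTrue(controller.doCheck(recorded));
    }

    private Input loadRecordedInput(String name) {
        return new Input(); // in reality: deserialize the recorded payload from test resources
    }
}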