Recently (since v4.4) a new concept of Theories was added to JUnit.
In a nutshell, you mark your test method with the @Theory annotation (instead of @Test), make the method parameterized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
JUnit then runs your parameterized test method sequentially, passing it the parameters retrieved from @DataPoints one after another, but only until the first such invocation fails (for any reason).
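For illustration, a minimal Theory might look roughly like this (the class name and the values are made up; the point is that the whole theory fails as soon as one data point breaks the assertion, so later values are never checked):
@RunWith(Theories.class)
public class PositiveNumberTheoryTest {
    // imports assumed: org.junit.experimental.theories.{DataPoints, Theories, Theory},
    // org.junit.runner.RunWith, static org.junit.Assert.assertTrue

    // candidate values JUnit will feed into every @Theory method
    @DataPoints
    public static Integer[] candidates = {1, 5, 0, 42};

    // runs once per data point, but the theory fails on the first failing value (0 here),
    // so 42 is never exercised
    @Theory
    public void everyCandidateIsPositive(Integer candidate) {
        assertTrue(candidate > 0);
    }
}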
The concept seems very similar to @DataProvider from TestNG, but with data providers all the scenarios are run regardless of their individual results. That is useful because you can see which scenarios work and which don't, and fix your program more effectively.
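For comparison, a minimal TestNG sketch (names are illustrative) where every row is executed and reported individually, even when an earlier row fails:
public class PositiveNumberDataProviderTest {
    // imports assumed: org.testng.annotations.{DataProvider, Test},
    // static org.testng.Assert.assertTrue

    @DataProvider(name = "candidates")
    public Object[][] candidates() {
        return new Object[][] {{1}, {5}, {0}, {42}};
    }

    // TestNG runs this once per row and reports each result separately,
    // so the failure for 0 does not prevent 42 from being tested
    @Test(dataProvider = "candidates")
    public void everyCandidateIsPositive(int candidate) {
        assertTrue(candidate > 0);
    }
}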
So I wonder: what is the reason not to execute a @Theory-marked method for every @DataPoint? (It does not seem difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of the Theories runner and made it publicly available: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, which are placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down. Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures, you can also use JUnit's ErrorCollector rule:
The ErrorCollector rule allows execution of a test to continue after the first problem is found (for example, to collect all the incorrect rows in a table, and report them all at once)
For example, you can write a test like this:
// requires org.junit.Rule, org.junit.Test, org.junit.rules.ErrorCollector
// and static imports of the Hamcrest matchers (org.hamcrest.CoreMatchers.*)
public static class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = [..]
        String y = [..]
        // both checks run; any failures are collected and reported together
        // once the test method finishes, instead of stopping at the first one
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));
    }
}
The error collector uses Hamcrest Matchers; depending on your preferences, that is either a plus or a minus.
AFAIK, the idea is the same as with asserts: the first failure stops the test. That is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails as soon as the first assert fails.
Try looking at Parameterized; maybe it provides what you want.
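A minimal sketch of the Parameterized runner (names are illustrative); each parameter set becomes its own test, reported separately:
@RunWith(Parameterized.class)
public class PositiveNumberParameterizedTest {
    // imports assumed: org.junit.Test, org.junit.runner.RunWith,
    // org.junit.runners.Parameterized, org.junit.runners.Parameterized.Parameters,
    // java.util.{Arrays, Collection}, static org.junit.Assert.assertTrue

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {{1}, {5}, {0}, {42}});
    }

    private final int candidate;

    public PositiveNumberParameterizedTest(int candidate) {
        this.candidate = candidate;
    }

    // one test per parameter set; the failure for 0 is reported
    // without stopping the runs for the remaining values
    @Test
    public void isPositive() {
        assertTrue(candidate > 0);
    }
}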