Is it a bad idea to create tests that rely on each other within a test fixture?

For example:

// NUnit-like pseudo code (within a TestFixture)

Ctor()
{
    m_globalVar = getFoo();
}

[Test]
Create()
{
    a(m_globalVar);
}

[Test]
Delete()
{
    // depends on Create being run
    b(m_globalVar);
}

… or…

// NUnit-like pseudo code (within a TestFixture)

[Test]
CreateAndDelete()
{
    Foo foo = getFoo();
    a(foo);

    // depends on a(foo) having been called above
    b(foo);
}

…I’m going with the latter, and assuming that the answer to my question is:

No, at least not with NUnit, because according to the NUnit manual:

The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session.

...also, can I assume it's bad practice in general, since tests can usually be run separately? If so, the result of Create may never be cleaned up by Delete.
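For what it's worth, a rough sketch of how the combined version could guarantee that the clean-up in b(foo) runs even if something fails in between (still NUnit-like pseudo code, reusing the same getFoo/a/b placeholders):

[Test]
public void CreateAndDelete()
{
    Foo foo = getFoo();
    a(foo);    // "create"
    try
    {
        // assertions against the created state would go here
    }
    finally
    {
        b(foo);    // "delete" always runs, even if an assertion above throws
    }
}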

asked Jun 08 '10 by Nick Bolton


2 Answers

Yes, it is bad practice. In all unit test frameworks I know, the execution order of test methods is not guaranteed, thus writing tests which depend on the execution order is explicitly discouraged.

As you also noted, if test B depends on the (side) effects of test A, then either test A contains some common initialization code, which should be moved into a shared setup method instead; or the two tests are part of the same story and could be united (IMHO; some people insist on a single assert per test method, so they would disagree with me here); or test B should otherwise be made totally independent of test A regarding fixture setup.
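For instance, a minimal NUnit sketch of the "move it into setup" option (the FooTests fixture name is made up here, and getFoo/a/b are the placeholders from the question):

[TestFixture]
public class FooTests
{
    private Foo m_foo;

    // Runs before every test, so no test depends on another test having run first.
    [SetUp]
    public void SetUp()
    {
        m_foo = getFoo();
    }

    [Test]
    public void Create()
    {
        a(m_foo);
    }

    [Test]
    public void Delete()
    {
        // Establishes its own precondition instead of relying on Create having run.
        a(m_foo);
        b(m_foo);
    }
}

Each test can then be run on its own and in any order.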

answered by Péter Török


Definitely a bad idea. Unit tests should be lightweight, stateless, and have no dependencies on things such as the file system, registry, etc. This allows them to run quickly and makes them less brittle.

If your tests require executing in a certain order, then you can't ever be sure (at least without investigation) whether a test has failed because of execution order or a problem with the code!

This ultimately erodes confidence in your test suite and leads to its eventual abandonment.
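For example, a rough sketch of a self-cleaning fixture (again reusing the question's getFoo/a/b placeholders; the fixture name and the [TearDown] body are only illustrative):

[TestFixture]
public class StatelessFooTests
{
    private Foo m_foo;

    [SetUp]
    public void SetUp()
    {
        m_foo = getFoo();
    }

    // Runs after every test, pass or fail, so nothing leaks into the next test or the next run.
    [TearDown]
    public void TearDown()
    {
        b(m_foo);
    }

    [Test]
    public void Create()
    {
        a(m_foo);
    }
}

Each test starts from a clean slate and cleans up after itself, so execution order stops mattering.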

answered by Ben Cawley