
Integration Testing best practices

Our team has hundreds of integration tests that hit a database and verify results. I have two base classes for all the integration tests: one for retrieve-only tests and one for create/update/delete (CUD) tests. The retrieve-only base class regenerates the database during TestFixtureSetUp, so it runs only once per test class. The CUD base class regenerates the database before each test. Each repository class has its own corresponding test class.
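For illustration only (the attributes are NUnit 2.x names; `DatabaseHelper.Regenerate` is a hypothetical stand-in for whatever rebuild script is actually used), the two base classes might look roughly like this:

```csharp
using NUnit.Framework;

[TestFixture]
public abstract class ReadOnlyDatabaseTest
{
    // Runs once per test class: cheap enough for retrieve-only tests,
    // since they never modify the data.
    [TestFixtureSetUp]
    public void RebuildDatabaseOnce()
    {
        DatabaseHelper.Regenerate(); // hypothetical rebuild helper
    }
}

[TestFixture]
public abstract class CudDatabaseTest
{
    // Runs before every single test, because each create/update/delete
    // test needs a pristine database to assert against.
    [SetUp]
    public void RebuildDatabasePerTest()
    {
        DatabaseHelper.Regenerate(); // hypothetical rebuild helper
    }
}
```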

As you can imagine, this whole thing takes quite some time (approaching 7-8 minutes to run and growing quickly). Having this run as part of our CI (CruiseControl.Net) is not a problem, but running locally takes a long time and really prohibits running them before committing code.

My question is: are there any best practices to help speed up the execution of these kinds of integration tests?

I'm unable to execute them in-memory (à la SQLite) because we use some database-specific functionality (computed columns, etc.) that isn't supported in SQLite.

Also, the whole team has to be able to execute them, so running them on a local instance of SQL Server Express or something could be error prone unless the connection strings are all the same for those instances.
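One way to sidestep the connection-string problem is to standardize on the default SQL Server Express instance name, so a single shared config works on every developer machine. A hypothetical app.config fragment (database and key names are placeholders):

```
<!-- Every developer installs SQL Server Express under the default
     instance name, so this one connection string works everywhere. -->
<connectionStrings>
  <add name="IntegrationTests"
       connectionString="Data Source=localhost\SQLEXPRESS;Initial Catalog=AppIntegrationTests;Integrated Security=True" />
</connectionStrings>
```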

How are you accomplishing this in your shop and what works well?

Thanks!

asked Aug 25 '09 by Chris Conway


2 Answers

Keep your fast (unit) and slow (integration) tests separate, so that you can run them separately. Use whatever method for grouping/categorizing the tests is provided by your testing framework. If the testing framework does not support grouping the tests, move the integration tests into a separate module that has only integration tests.

The fast tests should take only a few seconds to run in total and should have high code coverage. These kinds of tests let developers refactor ruthlessly: they can make a small change, run all the tests, and be confident the change did not break anything.

The slow tests can take many minutes to run, and they verify that the individual components work together correctly. When developers make changes that might break something covered by the integration tests but not by the unit tests, they should run those integration tests before committing. Otherwise, the slow tests are left to the CI server.
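With NUnit categories (as in the answer below this one), the split might look something like this on the command line; the assembly name is a placeholder, and the flags are from the NUnit 2.x console runner:

```shell
# Developers, before committing: everything except the slow tests.
nunit-console.exe MyApp.Tests.dll /exclude:Integration

# CI server (e.g. a CruiseControl.NET task): the full suite.
nunit-console.exe MyApp.Tests.dll
```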

answered Oct 15 '22 by Esko Luontola


In NUnit you can decorate your test classes (or methods) with a category attribute, e.g.:

using NUnit.Framework;

[TestFixture]
[Category("Integration")]
public class SomeTestFixture {
    ...
}

[TestFixture]
[Category("Unit")]
public class SomeOtherTestFixture {
    ...
}

You can then stipulate in the server's build process that all categories get run, and require your developers to run only a subset of the available categories locally. Which categories they should run depends on things you understand better than I do, but the gist is that developers can test at the unit level while the server handles the integration tests.

answered Oct 15 '22 by grenade