Can anyone clearly define these levels of testing? I find it difficult to differentiate between them when doing TDD or unit testing. Could someone elaborate on how and when to implement each of these?
There are four main stages of testing that need to be completed before a program can be cleared for use: unit testing, integration testing, system testing, and acceptance testing.
Unit testing checks each unit in isolation, so by this stage it is assumed that the unit works fine on its own. Integration testing aims to ensure that, when units of functionality are integrated, no errors are introduced, whereas regression testing is run after every change to ensure that the change did not break units that were already tested.
Briefly:
Unit testing - You unit test each individual piece of code in isolation. Think of each file or class.
Integration testing - When you put several interacting units together, you conduct integration testing to make sure that integrating them has not introduced any errors.
Regression testing - After integrating (and perhaps fixing) you should run your unit tests again. This is regression testing: it ensures that further changes have not broken any units that were already tested. The unit tests you have already written can be run again and again for regression testing.
Acceptance testing - When the user/customer/business receives the functionality, they (or your test department) conduct acceptance tests to ensure that the functionality meets their requirements.
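To make the unit vs. integration distinction above concrete, here is a minimal pytest-style sketch; the `Calculator` and `ReportFormatter` classes are hypothetical, invented only for illustration:

```python
# test_example.py -- run with: pytest test_example.py
class Calculator:
    def add(self, a, b):
        return a + b

class ReportFormatter:
    def __init__(self, calculator):
        self.calculator = calculator

    def total_line(self, a, b):
        return f"Total: {self.calculator.add(a, b)}"

# Unit test: one unit (Calculator) exercised in isolation.
def test_calculator_add():
    assert Calculator().add(2, 3) == 5

# Integration test: two units exercised together to check their interaction.
def test_formatter_uses_calculator():
    formatter = ReportFormatter(Calculator())
    assert formatter.total_line(2, 3) == "Total: 5"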
You might also like to investigate white-box and black-box testing. There are also performance and load testing, and testing of the "-ilities", to consider.
Unit test: when it fails, it tells you what piece of your code needs to be fixed.
Integration test: when it fails, it tells you that the pieces of your application are not working together as expected.
Acceptance test: when it fails, it tells you that the application is not doing what the customer expects it to do.
Regression test: when it fails, it tells you that the application no longer behaves the way it used to.
Here's a simple explanation for each of the mentioned tests and when they are applicable:
Unit Test: A unit test is performed on a self-contained unit (usually a class or method) and should be run whenever a unit has been implemented or an update to a unit has been completed.
This means it's run whenever you've written a class/method, fixed a bug, changed functionality...
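Since the question mentions TDD, note that in that style the unit test is written first and fails until the unit is implemented. A minimal sketch, assuming a hypothetical `slugify` function:

```python
# test_slugify.py -- the tests are written first (red), then slugify is
# implemented until they pass (green).
def slugify(title):
    return title.strip().lower().replace(" ", "-")

def test_slugify_replaces_spaces_with_dashes():
    assert slugify("Hello World") == "hello-world"

def test_slugify_trims_and_lowercases():
    assert slugify("  TDD Rocks ") == "tdd-rocks"
```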
Integration Test: An integration test aims to check how well several units interact with each other. This type of test should be performed whenever a new form of communication has been established between units or the nature of their interaction has changed.
This means it's run whenever a recently written unit is integrated into the rest of the system, or whenever a unit that interacts with other systems has been updated (and has successfully passed its unit tests).
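As an illustration, here is a hypothetical integration test that uses Python's built-in sqlite3 (an in-memory database standing in for a real dependency) instead of a mock, so the actual SQL and wiring are exercised:

```python
# test_user_repository_integration.py -- run with: pytest
import sqlite3

class UserRepository:
    def __init__(self, connection):
        self.connection = connection

    def save(self, name):
        self.connection.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.connection.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# Integration test: the repository talks to a real (in-memory) database,
# so the SQL and the wiring are exercised rather than mocked.
def test_repository_persists_users():
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (name TEXT)")
    repo = UserRepository(connection)
    repo.save("alice")
    assert repo.count() == 1
```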
Regression Test: Regression tests are performed whenever anything in the system has been changed, in order to check that no new bugs have been introduced.
This means it's run after all patches, upgrades, and bug fixes. Regression testing can be seen as a special case of combined unit and integration testing.
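A regression test is often just an ordinary unit test that pins down a bug you have already fixed and stays in the suite so the bug cannot silently return. A minimal sketch, assuming a hypothetical `parse_price` function and bug:

```python
# test_price_regression.py
def parse_price(text):
    # Bug fix: prices with a thousands separator ("1,250.00") used to crash.
    return float(text.replace(",", ""))

# Regression test: kept in the suite and re-run after every change
# (e.g. `pytest` on each commit) so the old bug cannot quietly return.
def test_parse_price_handles_thousands_separator():
    assert parse_price("1,250.00") == 1250.0
```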
Acceptance Test: Acceptance tests are performed whenever it is relevant to check that a subsystem (possibly the entire system) fulfils its specification.
This means it's mainly run before finishing a new deliverable or announcing completion of a larger task. See this as your final check that you've really met your goals before running to the client/boss and announcing victory.
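For illustration, an acceptance test is phrased in terms of the customer's requirement rather than the code structure. The `ShoppingCart` API and the discount rule below are invented for this sketch; real acceptance tests often drive the deployed system through its UI or HTTP API instead:

```python
# test_checkout_acceptance.py -- run with: pytest
import pytest

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Requirement: orders of 100.00 or more get a 10% discount.
        return subtotal * 0.9 if subtotal >= 100 else subtotal

# Acceptance test: states the customer's requirement end to end,
# through the public interface only.
def test_orders_of_100_or_more_get_a_10_percent_discount():
    cart = ShoppingCart()
    cart.add("keyboard", 60.0)
    cart.add("mouse", 40.0)
    assert cart.total() == pytest.approx(90.0)
```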
This is at least the way I learned, though I'm sure there are other opposing views. Either way, I hope that helps.
I'll try:

Unit Test: is my single method working correctly? (NO dependencies, or dependencies mocked - see the sketch after this list.)
Integration Test: are my two separately developed modules working correctly when put together?
Regression Test: did I break anything by changing or writing new code? (Running unit/integration tests with every commit is technically automated regression testing.) The term is more often used in the context of QA - manual or automated.
Acceptance Test: testing done by the client to confirm that they "accept" the delivered software.
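A minimal sketch of "NO dependencies, or dependencies mocked", using Python's standard unittest.mock; the weather-related names are hypothetical:

```python
# test_greeting_unit.py -- run with: pytest
from unittest.mock import Mock

class WeatherClient:
    def today(self):
        raise NotImplementedError("the real implementation would call an external API")

def greeting(weather_client):
    return f"Good morning! It is {weather_client.today()} today."

# Unit test: the external dependency is replaced with a mock,
# so only the greeting logic itself is under test.
def test_greeting_mentions_the_weather():
    fake_client = Mock(spec=WeatherClient)
    fake_client.today.return_value = "sunny"
    assert greeting(fake_client) == "Good morning! It is sunny today."
```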
I can't comment (reputation too low :-| ), so...
@Andrejs makes a good point around differences between environments associated with each type of testing.
Unit tests are typically run on the developer's machine (and possibly during the CI build) with dependencies on other resources/systems mocked out.
Integration tests by definition need some degree of availability of the dependencies (the other resources and systems being called), so the environment is more representative. Data for testing may be mocked or may be a small obfuscated subset of real production data.
UAT/acceptance testing has to represent the real-world experience to the QA and business teams accepting the software, so it needs full integration, realistic data volumes, and fully masked/obfuscated data sets to deliver realistic performance and a realistic end-user experience.
Other "-ilities" are also likely to need an environment that is as close as possible to reality in order to simulate the production experience, e.g. performance testing, security, ...
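One common way to keep those environments separate (a sketch assuming pytest and a hypothetical `integration` marker) is to tag the tests and pick which set to run in each environment:

```python
# test_orders.py -- unit vs. integration tests selected per environment
import pytest

def test_discount_calculation_unit():
    # Fast, everything mocked or in memory: runs on every developer
    # machine and in the CI build.
    assert round(100 * 0.9, 2) == 90.0

@pytest.mark.integration  # marker registered in pytest.ini
def test_orders_against_real_database():
    # Placeholder for a test that talks to the real database and other
    # systems; only run where those dependencies are available.
    pass
```

With this split, `pytest -m "not integration"` suits the developer machine and the CI build, while `pytest -m integration` runs in the environment that actually has the real dependencies and representative data.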