If a project has 100% unit test coverage, are integration tests still needed?
I have never worked on a project with 100% unit test coverage, but I'm wondering: if your project attained this (or was in the 90% range), was your experience that you still needed integration tests? (Did you need fewer?)
I ask because integration tests seem to suck. They are often slow, fragile (they break easily), and opaque (when one breaks, someone has to dig through all the layers to find out what is wrong), and they are causing our project to slow way down... I'm beginning to think that having only unit tests (and perhaps a small handful of smoke tests) is the way to go.
In the long run, it seems like integration tests (in my experience) cost more than they save.
Thanks for your consideration.
No. In fact, chasing 100% coverage is usually a mistake that leads to poor testing. You need to cover the things that actually matter thoroughly. Trying to reach 100 percent often warps your test development toward exercising lines of the program that probably should not be there in the first place.
The short answer is yes. For software to work properly, all units should integrate and perform as they're expected to. To ensure this is the case, you will need to perform integration tests.
Generally speaking, unit tests are cheaper. They're easier to write—unless you're trying to add them to an existing app—which means developers don't spend quite as much time writing them. They're also cheaper to run: they don't usually require you to do special things for your environment or obtain external resources.
You may also want to include integration tests in your code coverage measurements if you need to prepare a full-stack coverage report; coverage can be measured on integration test runs just as it can on unit test runs.
I think it's important to define your terms before having this discussion.
Unit test tests a single unit in isolation. For me, that's a class. A unit test will create an object, invoke a method, and check a result. It answers the question "does my code do what I intended it to do?"
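For instance, here is a minimal sketch in Python's built-in unittest, assuming a hypothetical `ShoppingCart` class (the name and its methods are illustrative, not from any particular library):

```python
import unittest

from cart import ShoppingCart  # hypothetical class under test


class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()              # create an object...
        cart.add_item("apple", price=3)    # ...invoke methods...
        cart.add_item("pear", price=2)
        self.assertEqual(cart.total(), 5)  # ...and check the result


if __name__ == "__main__":
    unittest.main()
```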
Integration test tests the combination of two components in the system. It is focused on the relationship between the components, not the components themselves. It answers the question "do these components work together as intended?"
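A corresponding sketch of an integration test, again with hypothetical names (`OrderService`, `SqliteRepository`): the two components are wired together for real, and the test checks their relationship, not either one alone:

```python
import unittest

from orders import OrderService        # hypothetical components
from storage import SqliteRepository


class OrderPersistenceIntegrationTest(unittest.TestCase):
    def test_placed_orders_can_be_read_back(self):
        # No mocks here: the service writes through the real repository,
        # and the test verifies the round trip between the two components.
        repository = SqliteRepository(":memory:")
        service = OrderService(repository)
        order_id = service.place_order(customer="alice", amount=42)
        self.assertEqual(service.find_order(order_id).customer, "alice")
```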
System test tests the whole software system. It answers the question "does this software work as intended?"
Acceptance test is an automated way for the customer to answer the question "is this software what I think I want?" It is a kind of system test.
Note that none of these tests answer questions like "is this software useful?" or "is this software easy to use?".
All automated tests are limited by the axiom "end-to-end is further than you think": eventually a human has to sit down in front of a computer and look at your user interface.
Unit tests are faster and easier to write, faster to run, and easier to diagnose. They don't depend on "external" elements like a file system or a database, so they are much simpler, faster, and more reliable. Most unit tests continue to work as you refactor (and good unit tests are the only way to refactor safely). They absolutely require that your code be decoupled, which is hard unless you write the test first. This combination of factors is what makes the Red/Green/Refactor sequence of TDD work so well.
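As an illustration of that decoupling (all names hypothetical): if the unit receives its collaborator as a parameter instead of touching the file system itself, the test can substitute an in-memory stand-in and stay fast and deterministic:

```python
import unittest


class GreetingService:
    """The unit under test depends on an injected loader, not on the file system."""

    def __init__(self, load_template):
        self._load_template = load_template

    def greet(self, name):
        return self._load_template().format(name=name)


class GreetingServiceTest(unittest.TestCase):
    def test_greet_fills_in_the_name(self):
        # A lambda replaces the real file-reading loader: no I/O, no setup.
        service = GreetingService(load_template=lambda: "Hello, {name}!")
        self.assertEqual(service.greet("Ada"), "Hello, Ada!")
```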
System tests are hard to write, because they have to go through so much setup to reach the specific situation you want to test. They are brittle, because any change in the software's earlier behavior can affect the sequence leading up to the situation you want to test, even if that behavior isn't relevant to the test itself. They are dramatically slower than unit tests for similar reasons. Failures can be very difficult to diagnose, both because it can take a long time to get to the point of failure and because so much software is involved in the failure. In some software, system tests are very difficult to automate.
Integration tests sit in between: they are easier to write, run, and diagnose than system tests, but with broader coverage than unit tests.
Use a combination of testing strategies to balance the costs and values of each.
Yes.
Even if all "units" do what they are supposed to do, it is no guarantee that the complete system works as designed.
Yes, and besides, there are a few different types of code coverage.
From Wikipedia:
Path coverage, for example: just because every method has been called doesn't mean that errors won't occur if you call various methods in a given order.
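A small, hypothetical illustration of that point: the single test below executes every line of the class, so line coverage reports 100%, yet one legal calling order still crashes:

```python
import unittest


class Counter:
    def __init__(self):
        self._value = None   # unusable until reset() is called

    def reset(self):
        self._value = 0

    def increment(self):
        self._value += 1     # TypeError if reset() was never called

    def value(self):
        return self._value


class CounterTest(unittest.TestCase):
    def test_reset_then_increment(self):
        # Executes every line above: line coverage says 100%.
        counter = Counter()
        counter.reset()
        counter.increment()
        self.assertEqual(counter.value(), 1)

    # The order increment() before reset() is never exercised, and it
    # raises "TypeError: unsupported operand type(s) for +=".
```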
First, 100% unit test coverage is not enough, even at the unit testing level: you cover only 100% of the instructions in your code. What about the paths through your code? What about the input and output domains?
Second, you don't know whether the output from a sender unit is compatible with the input expected by its receiver unit. This is the purpose of integration testing (see the sketch below).
Finally, unit testing may be performed in a different environment than production. Integration testing may reveal discrepancies.
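A sketch of that second point, with hypothetical names: each class below would look fine under unit tests written against its own assumption about the timestamp format, and only connecting the two reveals the mismatch:

```python
import datetime
import unittest


class Exporter:
    def export(self, when):
        # The sender emits ISO-8601 strings...
        return {"timestamp": when.isoformat()}


class Importer:
    def ingest(self, record):
        # ...but the receiver expects seconds since the epoch.
        return float(record["timestamp"])


class ExporterImporterIntegrationTest(unittest.TestCase):
    def test_exported_records_can_be_ingested(self):
        record = Exporter().export(datetime.datetime(2020, 1, 1))
        # The incompatibility surfaces only when the units are connected:
        # float() cannot parse "2020-01-01T00:00:00".
        with self.assertRaises(ValueError):
            Importer().ingest(record)
```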
You can only prove the presence of a bug using tests and coverage; you can never prove that the code is bug-free. This fact marks the boundary of what testing and coverage can do. It is the same in mathematics: you can disprove a conjecture by finding a counterexample, but you can never prove a theorem by failing to find one. So testing and coverage are only a substitute for correctness proofs, which are so difficult to do that they are almost never used. Testing and coverage can improve the quality of the code, but there is no guarantee. It remains a craft and not a science.
I've not really seen an answer that covers these considerations. I'm speaking from a holistic systems perspective, not from a SW development perspective, but: integration is basically the process of combining lower level products into a higher level product. Each level has its own set of requirements to comply with. Although it is possible that some requirements are the same, the overall requirements set will be different for different levels. This means that test objectives differ between levels. Also, the environment of the higher level product tends to be different from that of the lower level product (e.g. SW module testing may occur on a desktop environment, whereas a complete loadable SW item may be tested when loaded into its HW component). Furthermore, lower level component developers may not have the same understanding of the requirements and design as the higher level product developers, so integration testing also validates, to a certain extent, the lower level product development.
Unit tests are different from integration tests.
Just to make a point: if I had to choose, I would dump unit tests and go with integration tests. That said, experience tells me that unit tests help to ensure functionality and also find bugs early in the development cycle.
Integration testing is done with the product looking close to what it will look like to end users. That is important too.
Unit tests are generally all about testing your class in isolation. They should be designed to ensure that given specific inputs your class exhibits predictable and expected behaviors.
Integration tests are generally all about testing your classes in combination with each other and with "outside" programs using those classes. They should focus on ensuring that when the overall product uses your classes it is doing so in the correct manner.