
<100% Test coverage - best practices in selecting test areas [closed]

Tags:

testing

Suppose you're working on a project and the time/money budget does not allow 100% coverage of all code/paths.

It then follows that some critical subset of your code needs to be tested. Clearly a 'gut-check' approach can be used to test the system, where intuition and manual analysis can produce some sort of test coverage that will be 'ok'.

However, I'm presuming that there are best practices/approaches/processes that identify critical elements up to some threshold, letting you focus your testing effort on those blocks.

For example, one popular process for identifying failures in manufacturing is Failure Mode and Effects Analysis. I'm looking for a process(es) to identify critical testing blocks in software.
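FMEA-style thinking does translate to software: rank failure modes by a Risk Priority Number (severity × occurrence × detection, each typically scored 1-10) and spend the test budget on the highest-ranked items first. A minimal sketch, with made-up component names and scores:

```python
# Sketch of FMEA-style scoring applied to software components.
# RPN (Risk Priority Number) = severity x occurrence x detection, each 1-10.
# Component names and scores below are invented for illustration.

def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number; higher means test first."""
    return severity * occurrence * detection

components = {
    "payment_processing": rpn(severity=10, occurrence=4, detection=7),
    "report_export":      rpn(severity=3,  occurrence=6, detection=2),
    "user_login":         rpn(severity=8,  occurrence=9, detection=5),
}

# Spend the test budget on the highest-RPN components first.
priority = sorted(components, key=components.get, reverse=True)
print(priority)  # ['user_login', 'payment_processing', 'report_export']
```

The exact weights matter less than the discipline of scoring every component the same way before deciding where tests go.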

Paul Nathan asked Apr 09 '10




4 Answers

100% code coverage is not a desirable goal. See this blog for some reasons.

My best practice is to derive test cases from use cases. Create concrete traceability (I use a UML tool, but a spreadsheet works as well) between the use cases your system is supposed to implement and the test cases that prove it works.

Explicitly identify the most critical use cases. Now look at the test cases they trace to. Do you have many test cases for the critical use cases? Do they cover all aspects of the use case? Do they cover negative and exception cases?

I have found that to be the best formula (and best use of the team's time) for ensuring good coverage.
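The traceability matrix described above can be sketched as a simple mapping from use cases to the tests that cover them, with a check that flags critical use cases with thin coverage. Use-case and test names here are invented for illustration:

```python
# Sketch of a use-case -> test-case traceability matrix.
# Flags critical use cases that trace to too few test cases.
# All names below are hypothetical.

traceability = {
    # use case: (is_critical, test cases covering it)
    "transfer_funds": (True,  ["test_transfer_ok", "test_insufficient_balance"]),
    "view_statement": (False, ["test_statement_renders"]),
    "close_account":  (True,  []),  # critical but untested!
}

def coverage_gaps(matrix, min_tests=2):
    """Critical use cases traced to fewer than min_tests test cases."""
    return [uc for uc, (critical, tests) in matrix.items()
            if critical and len(tests) < min_tests]

print(coverage_gaps(traceability))  # ['close_account']
```

Running a check like this as part of the build keeps the traceability honest: a new critical use case without tests fails loudly instead of silently shipping.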

EDIT:

Here is a simple, contrived example of why 100% code coverage does not guarantee you have tested 100% of cases. Say CriticalProcess() is supposed to call AppendFile() to append text, but instead calls WriteFile() and overwrites the file.

[TestMethod]
public void Cover100Percent()
{
    CriticalProcess(true, false);
    Assert.AreEqual("A is true", FileContents("TestFile.txt"));

    CriticalProcess(false, true);
    Assert.AreEqual("B is true", FileContents("TestFile.txt"));

    // You could leave out this test, still have 100% statement coverage,
    // and never know the app is broken.
    CriticalProcess(true, true);
    Assert.AreEqual("A is trueB is true", FileContents("TestFile.txt"));
}

void CriticalProcess(bool a, bool b)
{
    if (a)
    {
        // Bug: should call AppendFile() -- WriteFile() overwrites.
        WriteFile("TestFile.txt", "A is true");
    }

    if (b)
    {
        // Bug: clobbers whatever the first branch wrote.
        WriteFile("TestFile.txt", "B is true");
    }
}
Eric J. answered Oct 23 '22


Unless you're doing greenfield development using TDD, you are unlikely to get (or want) 100% test coverage. Code coverage is more of a guideline, something to ask "what haven't I tested?"

You may want to look at other metrics, such as cyclomatic complexity. Find the complex areas of your code and test those (then refactor to simplify).
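McCabe cyclomatic complexity is essentially one plus the number of decision points in a function. A rough sketch using Python's standard `ast` module is below; real tools (e.g. radon or lizard for Python, or metrics built into many IDEs) count more node types and handle edge cases, so treat this only as an illustration of the idea:

```python
import ast

# Rough sketch: approximate cyclomatic complexity by counting decision
# points, plus 1 for the single entry path. Only a few node types are
# counted here; dedicated tools are more thorough.

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(a, b):\n"
    "    if a and b:\n"
    "        return 1\n"
    "    for i in range(a):\n"
    "        if i % 2:\n"
    "            return i\n"
    "    return 0\n"
)

print(complexity(simple))   # 1
print(complexity(branchy))  # higher -> test (and refactor) this one first
```

Sorting your modules by a score like this is a quick, objective way to decide where the limited test budget goes.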

TrueWill answered Oct 23 '22


There are three main factors you should be aware of:

  • Important features - you should know what is most critical. Ask yourself "How screwed would I (or my customer) be if there's a bug in this component/code snippet?". Your customer can probably help you determine these priorities. Features that deal directly with money tend to fall into this category.
  • Frequently used features - the most common use cases should be as bug-free as possible. Nobody cares about a bug in a part of the system no one uses.
  • Most complex features - developers usually have a good idea of which parts of the code are most likely to contain bugs. Give those special attention.

If you have this info, then it probably won't be hard choosing how to distribute your testing resources.

Samuel Carrijo answered Oct 23 '22


False sense of security: Always be aware that test coverage can give you a false sense of security. A great article about this can be found on the disco blog. Relying only on "green" indicators lets you miss untested paths.

Good indicator for untested paths: On the other hand, missing test coverage (usually displayed in red) is a great indicator of paths that are not covered. Check these first: they are easy to spot and let you evaluate whether you want to add coverage there or not.

Code-centric approach to identifying critical elements: There is great tooling support available to help you find the mess and possible gotchas in your code. Have a look at the IntelliJ IDE and its code analysis features, or at FindBugs, Checkstyle and PMD. Sonar is a great free tool that combines these static code analyzers.

Feature-centric approach to identifying critical elements: Evaluate your software and break it down into features. Ask yourself questions like: "Which features are most important and should be most reliable? Where do we have to take care of the correctness of results? Where would a bug or failure be most destructive to the software?"

Liuh answered Oct 23 '22