
What are the differences between the three methods of code coverage analysis?

This Sonar page lists the various methods employed by different code coverage analysis tools:

  1. Source code instrumentation (used by Clover)
  2. Offline byte code instrumentation (used by Cobertura)
  3. On-the-fly byte code instrumentation (used by JaCoCo)

What are these three methods, which one is the most efficient, and why? If the answer to the efficiency question is "it depends", then please explain why.

Geek asked Mar 06 '13


People also ask

What is coverage and what are the different types of coverage techniques?

Commonly discussed coverage types include test coverage, statement coverage, decision coverage, and branch coverage, each with its own way of working and of computing a coverage percentage. These coverage types are mostly used to check the reliability and functionality of test cases and to verify their outcomes.
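As a quick illustration of the difference between statement and decision coverage (a made-up example, not taken from any of the tools discussed here):

    public class Clamp {
        static int clamp(int x) {
            int result = x;
            if (x > 10) {
                result = 10;
            }
            return result;
        }

        public static void main(String[] args) {
            // A "test suite" consisting of this single call executes every
            // statement (100% statement coverage), but the false outcome of
            // the x > 10 decision is never exercised, so decision/branch
            // coverage is only 50%.
            System.out.println(clamp(20));
        }
    }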

How many types are there in code coverage analysis?

The main types of code coverage analysis include statement coverage and block coverage, function coverage, and function call coverage.

What are main differences between code coverage and test coverage?

Code coverage is measured by the percentage of code that is covered during testing, whereas test coverage is measured by the features that are covered via tests.


2 Answers

Source code instrumentation consists of adding instructions to the source code before compiling it. These instructions are used to trace which parts of the code have been executed.
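As a rough sketch of the idea (the Coverage class and the probe ids below are invented for illustration; they are not Clover's actual runtime API):

    public class Example {
        // Original source:
        //   int max(int a, int b) {
        //       if (a > b) return a;
        //       return b;
        //   }

        // What the source might look like after a hypothetical instrumenter
        // has inserted tracing calls, before javac ever sees it:
        int max(int a, int b) {
            Coverage.hit(0);                          // method entered
            if (a > b) { Coverage.hit(1); return a; } // then-branch taken
            Coverage.hit(2);                          // fall-through taken
            return b;
        }
    }

    // Minimal stand-in for the runtime that records which probes fired.
    class Coverage {
        private static final boolean[] PROBES = new boolean[3];
        static void hit(int id) { PROBES[id] = true; }
    }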

Offline byte-code instrumentation consists of adding those same instructions, but after compilation, directly into the byte-code.
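A sketch of what "offline" means in practice: the class files produced by the compiler are rewritten on disk before the tests run. The instrumentBytes method below is only a placeholder for the real bytecode rewriting, which a tool like Cobertura performs with a bytecode library; the directory name is just an assumption.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    public class OfflineInstrumenter {
        public static void main(String[] args) throws IOException {
            // Walk the compiled output directory and rewrite every class file in place.
            Path classesDir = Paths.get(args.length > 0 ? args[0] : "target/classes");
            try (Stream<Path> files = Files.walk(classesDir)) {
                files.filter(p -> p.toString().endsWith(".class"))
                     .forEach(OfflineInstrumenter::rewrite);
            }
        }

        private static void rewrite(Path classFile) {
            try {
                byte[] original = Files.readAllBytes(classFile);
                Files.write(classFile, instrumentBytes(original));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        // Placeholder: a real tool would insert probe updates at branches and
        // method entries here; the bytes are returned unchanged.
        private static byte[] instrumentBytes(byte[] classBytes) {
            return classBytes;
        }
    }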

On-the-fly byte-code instrumentation consists of adding those same instructions to the byte-code, but dynamically, at runtime, when the byte-code is loaded by the JVM.
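A minimal sketch of the on-the-fly approach using the standard java.lang.instrument agent mechanism; the instrument method is again a placeholder for the rewriting a real tool such as JaCoCo performs before handing the bytes back to the JVM:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    public class CoverageAgent {
        // Invoked by the JVM when it is started with -javaagent:<agent jar>.
        public static void premain(String args, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    if (className == null || className.startsWith("java/")) {
                        return null; // null means "keep the original bytes"
                    }
                    return instrument(classfileBuffer);
                }
            });
        }

        // Placeholder for the actual bytecode rewriting.
        private static byte[] instrument(byte[] classBytes) {
            return classBytes;
        }
    }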

This page has a comparison between the methods. It might be biased, since it's part of the Clover documentation.

Depending on your definition of "efficient", choose the one you like the most. I don't think you'll get enormous differences. They all do the job, and the big picture will be the same whichever method is used.

JB Nizet answered Oct 17 '22


In general the effect on coverage is the same.

Source code instrumentation can give superior reporting results, simply because byte-code instrumentation cannot distinguish any structure within source lines, as the code block granularity is only recorded in terms of source lines.

Imagine I have two nested if statements (or equivalently, if (a && b) ...) on a single line. A source code instrumenter can see these and provide coverage information for the multiple arms within the if, within the source line; it can report blocks based on lines and columns. A byte code instrumenter only sees one line wrapped around the conditions. Does it report the line as "covered" if condition a is evaluated but is false?
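For instance (a toy example, assuming a purely line-based report):

    public class OneLineBranches {
        static boolean check(boolean a, boolean b) {
            if (a && b) return true; else return false;  // everything on one source line
        }

        public static void main(String[] args) {
            // Only the a == false path runs: a line-based report marks the
            // line above as covered, yet b was never evaluated and the
            // true-branch never executed. A source-level instrumenter can
            // report those inner arms separately.
            System.out.println(check(false, true));
        }
    }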

You may argue this is a rare circumstance (and it probably is), and that the extra precision is therefore not very useful. When you get bogus coverage on such a line followed by a field failure, you may change your mind about its utility.

There's a nice example and explanation of how byte code instrumentation makes getting coverage of switch statements right extremely difficult.

A source code instrumenter may also achieve faster test executions, because it gets the compiler's help in optimizing the instrumented code. In particular, a probe inserted inside a loop by a binary instrumenter may end up compiled inside the loop by the JIT, whereas a good Java compiler will see that source-level instrumentation produces a loop-invariant result and lift it out of the loop. (A JIT compiler can arguably do this too; the question is whether it actually does.)
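A sketch of that point about loops; the PROBES array and probe id are made up for illustration:

    public class LoopProbes {
        static final boolean[] PROBES = new boolean[4];

        static int sum(int[] values) {
            int total = 0;
            for (int v : values) {
                PROBES[3] = true;   // probe planted inside the loop body
                total += v;
            }
            return total;
        }
        // Setting a flag that is already true is loop-invariant, so a
        // compiler optimizing instrumented *source* could hoist the
        // assignment out of the loop, while a probe injected into the
        // byte code may run on every iteration unless the JIT performs
        // the same hoisting.
    }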

Ira Baxter answered Oct 17 '22