 

What can be alternative metrics to code coverage?

Code coverage is probably the most controversial code metric. Some say you have to reach 80% code coverage; others say it's superficial and does not say anything about your testing quality. (See Jon Limjap's good answer on "What is a reasonable code coverage % for unit tests (and why)?".)

People tend to measure everything; they need comparisons, benchmarks, etc.
Project teams need a pointer to how good their testing is.

So what are alternatives to code coverage? What would be a good metric that says more than "I touched this line of code"?
Are there real alternatives?

guerda asked Jun 26 '09


People also ask

What is used as a measure of code coverage?

To calculate the code coverage percentage, simply use the following formula: Code Coverage Percentage = (Number of lines of code executed by a testing algorithm/Total number of lines of code in a system component) * 100.
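
As a worked example, the formula is just a ratio; the small sketch below is illustrative only (the argument names are not from any particular tool):

    def coverage_percentage(executed_lines, total_lines):
        """Code coverage % = (lines executed by tests / total lines) * 100."""
        if total_lines == 0:
            return 0.0
        return executed_lines / total_lines * 100

    # A component with 1,000 lines of code, of which the tests execute 800:
    print(coverage_percentage(800, 1000))   # 80.0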

Which tool should be used to monitor code coverage?

Coverage.py is a code coverage tool for Python. It monitors your Python programs, notes which parts of the code have been executed, and analyzes the source to identify code that could have been executed but was not.
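
As a minimal sketch, coverage.py can also be driven programmatically (it is more commonly run from the command line; the module name below is a hypothetical stand-in for your own code):

    import coverage

    cov = coverage.Coverage()     # create a measurement object
    cov.start()                   # begin recording which lines run

    import my_module              # hypothetical module under test
    my_module.do_something()      # exercise the code you want measured

    cov.stop()                    # stop recording
    cov.save()                    # write the .coverage data file
    cov.report()                  # print a line-coverage summary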

What is the best code coverage tool?

#1) Parasoft JTest: Its report provides a good picture of the code covered and thereby minimizes risk. Key features: it is used for Java-based applications, and it is a multi-tasking tool that includes data flow analysis, unit testing, static analysis, runtime error detection, code coverage testing, etc.


2 Answers

If you are looking for some useful metrics that tell you about the quality (or lack thereof) of your code, you should look into the following metrics:

  1. Cyclomatic Complexity
    • This is a measure of how complex a method is.
    • Usually 10 and lower is good, 11-25 is poor, higher is terrible (see the first sketch after this list).
  2. Nesting Depth
    • This is a measure of how many nested scopes are in a method.
    • Usually 4 and lower is good, 5-8 is poor, higher is terrible.
  3. Relational Cohesion
    • This is a measure of how well related the types in a package or assembly are.
    • Relational cohesion is somewhat of a relative metric, but useful nonetheless.
    • Acceptable levels depend on the formula. Given the following:
      • R: number of relationships in package/assembly
      • N: number of types in package/assembly
      • H: Cohesion of relationship between types
    • Formula: H = (R+1)/N
    • Given the above formula, acceptable range is 1.5 - 4.0
  4. Lack of Cohesion of Methods (LCOM)
    • This is a measure of how cohesive a class is.
    • Cohesion of a class is a measure of how many fields each method references.
    • Good indication of whether your class meets the Principle of Single Responsibility.
    • Formula: LCOM = 1 - (sum(MF) / (M * F))
      • M: number of methods in class
      • F: number of instance fields in class
      • MF: number of methods in class accessing a particular instance field
      • sum(MF): the sum of MF over all instance fields
    • A class that is totally cohesive will have an LCOM of 0.
    • A class that is completely non-cohesive will have an LCOM of 1.
    • The closer to 0 you get, the more cohesive, and thus more maintainable, your class is (both cohesion formulas above are applied in the second sketch after this list).
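
To make the first two metrics concrete, here is a small made-up function annotated with how its cyclomatic complexity and nesting depth would be counted (the thresholds quoted are the rules of thumb above):

    def classify_order(order):
        # Cyclomatic complexity = number of decision points + 1.
        # Three decisions below (if / nested if / elif), so complexity = 4 ("good": <= 10).
        # Maximum nesting depth = 2 (the inner if), also in the "good" range (<= 4).
        if order.total > 1000:
            if order.is_new_customer:        # nesting depth 2
                return "manual review"
            return "priority"
        elif order.total > 100:
            return "standard"
        return "batch"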
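
And a second sketch that applies the two cohesion formulas directly to raw counts; the helper names and example numbers are illustrative only:

    def relational_cohesion(relationships, types):
        """H = (R + 1) / N; roughly 1.5 - 4.0 is considered acceptable."""
        return (relationships + 1) / types

    def lcom(methods, fields, methods_using_field):
        """LCOM = 1 - sum(MF) / (M * F), where methods_using_field[i] is the
        number of methods that access instance field i (MF)."""
        if methods == 0 or fields == 0:
            return 0.0
        return 1 - sum(methods_using_field) / (methods * fields)

    # A package with 20 types and 45 relationships between them:
    print(relational_cohesion(45, 20))   # 2.3  -> inside the 1.5 - 4.0 range

    # A class with 4 methods and 2 fields where every method touches every field:
    print(lcom(4, 2, [4, 4]))            # 0.0  -> fully cohesive
    # ...versus one where each field is touched by only a single method:
    print(lcom(4, 2, [1, 1]))            # 0.75 -> poorly cohesive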

These are just some of the key metrics that NDepend, a .NET metrics and dependency mapping utility, can provide for you. I recently did a lot of work with code metrics, and these four are the core metrics that we have found to be most useful. NDepend offers several other useful metrics, however, including Efferent & Afferent coupling and Abstractness & Instability, which combined provide a good measure of how maintainable your code will be (and whether or not you're in what NDepend calls the Zone of Pain or the Zone of Uselessness).

Even if you are not working with the .NET platform, I recommend taking a look at the NDepend metrics page. There is a lot of useful information there that you might be able to use to calculate these metrics for whatever platform you develop on.

jrista answered Jan 04 '23


Crap4j is one fairly good metric that I'm aware of...

It's a Java implementation of the Change Risk Analysis and Predictions software metric, which combines cyclomatic complexity and code coverage from automated tests.
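
If I recall the Crap4j documentation correctly (worth double-checking before relying on it), the score for a method m is CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m), so complex code with little test coverage is punished heavily. A quick sketch:

    def crap_score(complexity, coverage_pct):
        """CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)."""
        uncovered = 1 - coverage_pct / 100
        return complexity ** 2 * uncovered ** 3 + complexity

    # A complexity-15 method with no tests, versus the same method fully covered:
    print(crap_score(15, 0))     # 240.0 -> high risk
    print(crap_score(15, 100))   # 15.0  -> much lower risk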

mezoid answered Jan 04 '23