I just started working with PHPUnit and its colorful code coverage reports. I understand all the numbers and percentages save one: The C.R.A.P index. Can anyone offer me a solid explanation of what it means, how to analyze it and how to lower it?
@Toader Mihai offered a solid explanation. (+1 from me)
Write less complex code OR write better tested code. (See the graph below)
Better tested Code ?
In this context it just means higher code coverage, which usually results from writing more tests.
Less complex code ?
For example: Refactor your methods into smaller ones:
```php
// Complex
function doSomething() {
    if ($a) {
        if ($b) { }
        if ($c) { }
    } else {
        if ($b) { }
        if ($c) { }
    }
}

// 3 less complex functions
function doSomething() {
    if ($a) {
        doA();
    } else {
        doNotA();
    }
}

function doA() {
    if ($b) { }
    if ($c) { }
}

function doNotA() {
    if ($b) { }
    if ($c) { }
}
```
(just a trivial example; you'll find more resources on that, I'm sure)
First off let me provide some additional resources:
The creator's blog post about the C.R.A.P. index
just in case: Cyclomatic complexity explained. Tools like PHP_CodeSniffer and PHPMD will report that number in case you want to know it.
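To make the cyclomatic complexity number concrete: it starts at 1 for the single path through a method and goes up by one for every decision point (`if`, `elseif`, ternary, loop, `case`, etc.). A hypothetical example (function name and values are made up for illustration):

```php
<?php
// Cyclomatic complexity starts at 1 for the method's single entry path
// and increases by one per decision point.
function shippingCost(int $weight, bool $express): int // baseline: 1
{
    if ($weight <= 0) {                               // +1 → 2
        throw new InvalidArgumentException('weight must be positive');
    }
    $cost = $express ? 20 : 10;                       // +1 → 3 (ternary)
    if ($weight > 5) {                                // +1 → 4
        $cost += 5;
    }
    return $cost;                                     // total: 4
}
```

A method like this would need at least four tests to cover every decision path, which is exactly why complexity and coverage are intertwined in the index.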
And while it is for you to decide what number is "ok", one often-suggested threshold (a little high, in my opinion) is a C.R.A.P. index of 30, resulting in a graph like this:
(You can get the .ods file here: https://www.dropbox.com/s/3bihb9thlp2fyg8/crap.ods?dl=1 )
Basically it wants to be a predictor of the risk of change for a method.
It has two factors in it:

cyclomatic complexity, aka how many decision paths exist in said method: comp(m)

test coverage, aka how much of the method's code is exercised by automated tests
If the method has 100% coverage, then the risk of change is considered equivalent to the complexity of the method alone: C.R.A.P.(m) = comp(m)
If the method has 0% coverage, then the risk of change is considered a second-degree polynomial in the complexity measure (the reasoning being that if you can't test a code path, changing it increases the risk of breakage): C.R.A.P.(m) = comp(m)^2 + comp(m)
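The two boundary cases above interpolate via the commonly cited formula from the original blog post, C.R.A.P.(m) = comp(m)² × (1 − cov(m))³ + comp(m), with coverage as a fraction between 0 and 1. A minimal sketch of that calculation (this is an illustration of the formula, not PHPUnit's actual implementation):

```php
<?php
// C.R.A.P.(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
// $coverage is a fraction between 0.0 and 1.0.
// At cov = 1.0 this collapses to comp(m); at cov = 0.0 to comp(m)^2 + comp(m).
function crapIndex(int $complexity, float $coverage): float
{
    return $complexity ** 2 * (1 - $coverage) ** 3 + $complexity;
}
```

For example, a method with complexity 5 scores 5 at full coverage but 30 at zero coverage, which is how an untested method lands right at the often-suggested threshold.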
Hopefully this will help you.
I just noticed that I only provided half the answer (the part about how to read the index). How to improve it should be pretty clear if you understand the reasoning behind the index, but a much clearer explanation is given in @edorian's answer.
The short story is: write tests until you have near 100% coverage, and after that refactor the methods to decrease their cyclomatic complexity. You can try to refactor before having tests, but depending on the actual method complexity you risk introducing breakage if you can't reason through (because of the complexity involved) all the consequences of the change you are making.