 

Metrics for measuring successful refactoring [closed]

Are there objective metrics for measuring code refactoring?

Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking whether the code was actually improved rather than just changed?

I'm looking for metrics that can be determined and tested for, to help improve the code review process.

asked Apr 28 '09 at 14:04 by sal

2 Answers

Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking if the code was actually improved rather than just changed?

Actually, as I have detailed in the question "What is the fascination with code metrics?", the true added value of any metric (findbugs, CRAP, whatever) lies in its trend over time.
Watching how the metrics evolve lets you prioritize the fixes your code really needs, as opposed to blindly trying to satisfy every metric out there.

A tool like Sonar can be very useful in this domain (the monitoring of metrics).
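
To make "watching the trend" concrete, here is a minimal sketch (in Java, since the tools in question are Java tools) that compares two metric snapshots taken before and after a refactoring and flags regressions. The metric names and numbers are invented for illustration; in practice a tool like Sonar collects and charts these values for you.

    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Hypothetical before/after comparison of "lower is better" code metrics. */
    public class MetricTrend {
        public static void main(String[] args) {
            // Illustrative numbers only; real values would come from FindBugs,
            // Checkstyle, CRAP, etc. run on the revisions before and after.
            Map<String, Double> before = new LinkedHashMap<>();
            before.put("findbugs.warnings", 42.0);
            before.put("crap.load", 17.5);
            before.put("checkstyle.violations", 120.0);

            Map<String, Double> after = new LinkedHashMap<>();
            after.put("findbugs.warnings", 35.0);
            after.put("crap.load", 16.0);
            after.put("checkstyle.violations", 130.0);

            // Report every metric that got worse instead of better.
            before.forEach((name, oldValue) -> {
                double newValue = after.getOrDefault(name, oldValue);
                String verdict = newValue > oldValue ? "REGRESSED" : "ok";
                System.out.printf("%-25s %7.1f -> %7.1f  %s%n",
                        name, oldValue, newValue, verdict);
            });
        }
    }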


Sal adds in the comments:

The real issue is checking which code changes add value rather than just adding change

For that, test coverage is very important, because only tests (unit tests, but also larger "functional tests") will give you a valid answer.
But refactoring should not be done without a clear objective anyway. Doing it only because the result would be "more elegant" or even "easier to maintain" may not in itself be reason enough to change the code.
There should be other payoffs, such as bugs that get fixed in the process, or new features that can be implemented much faster thanks to the refactored code.
In short, the added value of a refactoring is not solely measured with metrics, but should also be evaluated against objectives and/or milestones.
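
As a hedged illustration of the point about tests, the JUnit sketch below pins the current behaviour of a hypothetical PriceCalculator before its internals are restructured; the class name and the discount rule are invented, but the idea is that a refactoring which only improves the internals keeps such a characterization test green.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    /**
     * Characterization test for a hypothetical PriceCalculator: it records what
     * the existing code does today, so a pure refactoring must not change it.
     */
    public class PriceCalculatorTest {

        @Test
        public void totalsAreUnchangedByRefactoring() {
            PriceCalculator calc = new PriceCalculator();
            // Expected values are simply whatever the current implementation returns.
            assertEquals(100.0, calc.total(10, 10.0), 0.001);
            assertEquals(950.0, calc.total(100, 10.0), 0.001); // assumed 5% bulk discount
        }
    }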

answered Oct 21 '22 at 21:10 by VonC


Yes, several measures of code quality can tell you if a refactoring improves the quality of your code.

  • Duplication. In general, less duplication is better. However, duplication finders that I've used sometimes identify duplicated blocks that are merely structurally similar but have nothing to do with one another semantically and so should not be deduplicated. Be prepared to suppress or ignore those false positives.

  • Code coverage. This is by far my favorite metric in general, but it's only indirectly related to refactoring. You can and should raise low coverage by writing more tests, but that's not refactoring. However, you should monitor code coverage while refactoring (as with any other change to the code) to be sure it doesn't go down. Refactoring can improve code coverage by removing untested copies of duplicated code.

  • Size metrics such as lines of code, total and per class, method, function, etc. A Jeff Atwood post lists a few more. If a refactoring reduces lines of code while maintaining clarity, quality has increased. Unusually long classes, methods, etc. are likely to be good targets for refactoring. Be prepared to use judgement in deciding when a class, method, etc. really does need to be longer than usual to get its job done.

  • Complexity metrics such as cyclomatic complexity. Refactoring should try to decrease complexity and not increase it without a well thought out reason. Methods/functions with high complexity are good refactoring targets.

  • Robert C. Martin's package-design metrics: Abstractness, Instability and the Distance from the abstractness-instability main sequence. He described them in his article on stability in C++ Report and his book Agile Software Development, Principles, Patterns, and Practices. JDepend is one tool that measures them. Refactoring that improves package design should minimize D, the distance from the main sequence (see the sketch below).

I have used and continue to use all of these to monitor the quality of my software projects.
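
For the package-design metrics in particular, the arithmetic is simple: abstractness A is the ratio of abstract types to all types in a package, instability I = Ce / (Ca + Ce) (efferent coupling over total coupling), and the normalized distance from the main sequence is D = |A + I - 1|, where 0 is ideal. The sketch below just applies those formulas; the counts are invented, and JDepend computes the real values from your code.

    /** Illustrative calculation of Robert C. Martin's package-design metrics. */
    public class PackageMetrics {

        /** Abstractness A: abstract classes and interfaces over all types in the package. */
        static double abstractness(int abstractTypes, int totalTypes) {
            return (double) abstractTypes / totalTypes;
        }

        /** Instability I = Ce / (Ca + Ce): efferent coupling over total coupling. */
        static double instability(int afferentCouplings, int efferentCouplings) {
            return (double) efferentCouplings / (afferentCouplings + efferentCouplings);
        }

        /** Normalized distance from the main sequence, D = |A + I - 1|; 0 is ideal. */
        static double distance(double abstractness, double instability) {
            return Math.abs(abstractness + instability - 1);
        }

        public static void main(String[] args) {
            // Invented counts for a hypothetical package.
            double a = abstractness(2, 10); // 2 abstract types out of 10
            double i = instability(4, 6);   // Ca = 4, Ce = 6
            System.out.printf("A=%.2f I=%.2f D=%.2f%n", a, i, distance(a, i));
        }
    }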

answered Oct 21 '22 at 20:10 by Dave Schweisguth