My boss thinks that any code we write (in C/C++) has to comply with the standards specified by a static analysis tool (like MISRA/Lint). My take is: since compilers are well developed today, is this really required?
The question here is: how effective is static analysis these days?
Static analysis, also called static code analysis, is a method of computer program debugging that is done by examining the code without executing the program. The process provides an understanding of the code structure and can help ensure that the code adheres to industry standards.
Static code analysis is performed early in development, before software testing begins. For organizations practicing DevOps, static code analysis takes place during the "Create" phase, and it supports DevOps by creating an automated feedback loop.
Static analysis tools are known to produce false positives, and these false positives can "outweigh" the true positives in volume [33]. It is also well known that, especially on larger projects, the number of warnings produced by a tool can be high, sometimes in the thousands [9].
There are things that static analysis can't identify. For instance, static analysis can't detect whether software requirements have been fulfilled or how a function will execute. You'll need dynamic testing for that. That's why static analysis and dynamic testing are complementary.
Short answer: Yes.
Long answer: Compilers are indeed getting better at analysing certain kinds of mistakes, but the depth at which they work is typically far less than that of dedicated tools. In particular, tools that work across compilation units, such as Coverity, can understand (for example) that a function may return a NULL pointer, and that if it does, your code will crash because you don't check for NULL before accessing the pointer.
Static analysis tools can also check lock usage, which the compiler typically can't.
As to "how effective" it is: that really depends on which tool you are using, its settings, and how well you otherwise test the code. So "code coverage" comes into it too. Does your testing go through every branch of your code, with every possible value that causes a difference in behaviour? Static analysis tools can detect errors that your testing may not cover.
(Obviously, whether it actually makes sense in your particular business is a completely different discussion - that's for your boss, and his/her bosses to decide)
It is not that compilers cannot do the analysis that static analysis tools do. The problem is that static code analysis takes a significant amount of time and is usually not needed for every single compilation. Compilers are generally optimized for a balance of code quality and compilation time. If a compiler happens to stumble on an error in your code it will tell you, but it has no time to actively look for bugs.
Also, static analysis is important for generating metrics. Those metrics can include cyclomatic complexity, depth of class inheritance, dependency graphs, percentage of comments, and much more (Understand, for example, has a complete list of features).
A good point too is that most static analysis tools allow you to add project-specific rules, which is useful if your project or company has coding rules.
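For instance, with clang-tidy (one common open-source analyser) a local naming convention can be encoded in a `.clang-tidy` file; the check names are real clang-tidy options, but the specific values here are illustrative:

```yaml
# Hypothetical project ruleset: enable a check family and enforce
# local naming conventions (the chosen cases are illustrative).
Checks: 'readability-identifier-naming,bugprone-*'
CheckOptions:
  - key:   readability-identifier-naming.GlobalConstantCase
    value: UPPER_CASE
  - key:   readability-identifier-naming.FunctionCase
    value: lower_case
```

Commercial tools like those mentioned above offer similar, usually richer, rule-customisation mechanisms.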
You can also add a "continuous integration" server that checks out your svn/git/other development branch and runs the analysis overnight. That way, the next day you can fix the code that doesn't comply with your ruleset.
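A nightly job along those lines can be as simple as a cron-driven shell script; the paths below are hypothetical, and cppcheck stands in for whatever analyser you adopt:

```shell
#!/bin/sh
# Hypothetical nightly analysis job: refresh the branch, then run
# cppcheck (an open-source C/C++ analyser) and keep a dated report.
git -C /srv/checkout pull --ff-only
cppcheck --enable=warning,style --error-exitcode=1 /srv/checkout/src \
    2> /srv/reports/cppcheck-$(date +%F).txt
```

A dedicated CI system (Jenkins, GitLab CI, etc.) gives you the same effect with history and notifications on top.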