One of the questions that I asked some time ago had undefined behavior, so compiler optimization was actually causing the program to break.
But if there is no undefined behavior in your code, is there ever a reason not to use compiler optimization? I understand that, for debugging purposes, one might not want optimized code (please correct me if I am wrong). Other than that, for production code, why not always use compiler optimization?
Also, is there ever a reason to use, say, -O instead of -O2 or -O3?
In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers).
The optimization must be correct: it must not, in any way, change the meaning of the program. It should increase the speed and performance of the program, while the compilation time is kept reasonable; the optimization process should not unduly delay the overall build.
Optimization is necessary in the code produced by a simple code generator for several reasons: it enhances the portability of the compiler to the target processor, it allows consumption of fewer resources (CPU, memory), and optimized code executes faster.
If there is no undefined behavior, but there is definite broken behavior (either deterministic normal bugs, or indeterminate ones like race conditions), it pays to turn off optimization so you can step through your code with a debugger.
Typically, when I reach this kind of state, I like to use a combination of approaches. If the bug is more devious, I pull out valgrind and drd, and add unit tests as needed, both to isolate the problem and to ensure that, when the problem is found, the solution works as expected.
In some extremely rare cases, the debug code works but the release code fails. When this happens, the problem is almost always in my code; aggressive optimization in release builds can reveal bugs caused by misunderstood lifetimes of temporaries, and so on. But even in this kind of situation, having a debug build helps to isolate the issues.
In short, there are some very good reasons why professional developers build and test both debug (non-optimized) and release (optimized) binaries. IMHO, having both debug and release builds pass unit-tests at all times will save you a lot of debugging time.
Compiler optimisations have some disadvantages:
Some of the optimisations performed by -O3 can result in larger executables. This might not be desirable in some production code.
Another reason to not use optimisations is that the compiler that you are using may contain bugs that only exist when it is performing optimisation. Compiling without optimisation can avoid those bugs. If your compiler does contain bugs, a better option might be to report/fix those bugs, to change to a better compiler, or to write code that avoids those bugs completely.
If you want to be able to perform debugging on the released production code, then it might also be a good idea to not optimise the code.