I don't exactly understand the following:
When using debugging and optimization together, the internal rearrangements carried out by the optimizer can make it difficult to see what is going on when examining an optimized program in the debugger. For example, the ordering of statements may be changed.
What I understand is that when I build a program with the -g
option, the executable will contain a symbol table with variable and function names, references to them, and their line numbers.
And when I build with an optimization option, the ordering of instructions may be changed, for example, depending on the optimization.
What I don't understand is why debugging becomes more difficult. I would like to see an example and an easy-to-understand explanation.
An example that might happen:
int calc(int a, int b)
{
return a << b + 7;
}
int main()
{
int x = 5;
int y = 7;
int val = calc(x, y);
return val;
}
Optimized, this might be the same as
int main()
{
return 81920; /* == 5 << (7 + 7); note + binds tighter than << */
}
A contrived example, but trying to debug that kind of optimization in actual code isn't simple. Some debuggers may show all the lines as marked when stepping through, some may skip them all, and some may be confused. And the developer, at least, certainly is.
A simple example:
int a = 4;
int b = a;
int c = b;
printf("%d", c);
can be optimized as:
printf("%d", 4);
In fact, in optimized builds the compiler might well do exactly this (in machine code, of course).
When debugging, the debugger lets us inspect the memory associated with a, b and c, but once the top version is optimized into the bottom one, a, b and c no longer exist in RAM at all. That makes it much harder to inspect memory and figure out what is going on.