Consider the following simple program:
var dblMax = Double.MaxValue;
var result = (dblMax * 1000) / 1800;
Console.WriteLine(result);
When I build this in Debug mode and run (Ctrl+F5) or debug (F5) it, it prints 9.98718408256842E+307.
When I switch to Release mode and run (Ctrl+F5) it, it prints ∞ for infinity.
I understand that this difference is due to some compiler optimization which is done in Release mode.
However, if I debug (F5) the same build in Release mode, it prints 9.98718408256842E+307 again!
How does the fact that I am debugging change the result of the calculation?
Edit:
I am not asking why Debug and Release builds yield different results. I am asking why the Release build yields different results depending on whether I debug it (F5) or just run it (Ctrl+F5).
When debugging, the JITter behaves differently.
For one thing, local variables will in many cases have their lifetimes extended so that they remain inspectable. Consider hitting a breakpoint after a variable was used in a calculation: if the JITter knew the variable would not be used after the expression and did not prolong its lifetime, you could end up unable to look at that variable, and inspecting variables is a core feature of debugging.
The JITter knows very precisely how long a variable is still useful to keep around. If a register is available during that time, it may use that register to store the variable.
With a debugger attached, however, it might use a memory location instead, because the extended lifetime means a register is no longer available for that part of the code.
On x86, the x87 floating point registers of the CPU are 80 bits wide, which gives them more precision (and range) than the 64-bit double storage format. That means that once you either lift a value out of a register into memory, or simply keep it in memory the whole time, you get the lower precision and range of the 64-bit format.
The difference between RELEASE and DEBUG build can end up dictating these things, as can the presence of a debugger.
Additionally, there can be differences between the different .NET runtime versions which can affect this.
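As a rough sketch of that effect (assuming a 32-bit x86 process where the JIT keeps intermediates in 80-bit x87 registers), you can force the truncation yourself with an explicit cast, which the C# compiler uses to round the value to 64-bit precision; the intermediate then overflows to infinity in every configuration:

var dblMax = Double.MaxValue;

// Without a cast, dblMax * 1000 may stay in an 80-bit register and not overflow.
var extended = (dblMax * 1000) / 1800;

// The explicit (double) cast rounds the intermediate down to 64 bits, where
// dblMax * 1000 is already infinity, so the final result is infinity regardless
// of Debug/Release or whether a debugger is attached.
var truncated = (double)(dblMax * 1000) / 1800;

Console.WriteLine(extended);   // may print 9.98718408256842E+307
Console.WriteLine(truncated);  // prints infinity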
Writing floating point code correctly requires intimate knowledge about what you are attempting to do and how the various parts of the machine and platform will interfere. I would try to avoid writing code like this.
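As a side note, a simple way to make this particular calculation robust regardless of register width is to reorder the operations so that no intermediate result can overflow, for example by dividing before multiplying:

var dblMax = Double.MaxValue;

// Dividing first keeps the intermediate around 9.99E+304, comfortably inside
// the 64-bit double range, so the multiplication cannot overflow whether the
// intermediate lives in an 80-bit register or in memory.
var result = dblMax / 1800 * 1000;

Console.WriteLine(result);   // about 9.98718408256842E+307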
That is strictly related to floating point precision. In Debug mode (or with a debugger attached), the JIT compiler keeps the intermediate result at the 80-bit precision of the x87 registers. In Release mode (run without a debugger) it produces 64-bit truncated intermediate results.
Whether or not this happens depends on several configuration settings and environment factors. For example, you can turn off optimizations in your Release configuration; that should help.
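If you only want to suppress optimizations for a single method rather than the whole Release build, one possible approach (a sketch; whether it actually changes the floating point result still depends on the JIT and platform) is the MethodImpl attribute:

using System;
using System.Runtime.CompilerServices;

class FloatDemo
{
    // Asks the JIT not to inline or optimize this method, which tends to give
    // Debug-like code generation even in a Release build.
    [MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
    static double Compute()
    {
        var dblMax = Double.MaxValue;
        return (dblMax * 1000) / 1800;
    }

    static void Main()
    {
        Console.WriteLine(Compute());
    }
}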
Take a look at this Jon Skeet answer: https://stackoverflow.com/a/18417944/637840