We have some unit tests that check the solution of a linear system of equations, comparing floating-point numbers with a delta. While trying to adjust the delta, I noticed that the same number changes slightly between Visual Studio's Run test and Debug test modes.

Why does this happen? When I debug a test the #if DEBUG sections are disabled, so the executed code should be the same.

Thanks.
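For illustration, here is a minimal sketch of the kind of delta-based assertion being discussed (assuming MSTest; the 2x2 system, the expected values, and the tolerance are made-up stand-ins for the real solver tests):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SolverTests
{
    [TestMethod]
    public void Solve_2x2System_MatchesExpectedWithinDelta()
    {
        // Solve { 2x + y = 3, x + 3y = 5 } by Cramer's rule, standing in
        // for the real solver under test.
        double det = 2 * 3 - 1 * 1;
        double x = (3 * 3 - 1 * 5) / det;
        double y = (2 * 5 - 3 * 1) / det;

        const double delta = 1e-9;   // the tolerance being tuned
        Assert.AreEqual(0.8, x, delta);
        Assert.AreEqual(1.4, y, delta);
    }
}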
Testing is the process of finding bugs and errors, i.e. of identifying where the implemented code fails. Debugging is the process of locating and correcting the bugs found during testing.
Run simply launches the application (regardless of which build flavor it is). Debug essentially does the same thing, but will stop at any breakpoints you might have set ...
One of the main reasons the Debug build is significantly slower is all of the extra diagnostics it contains. As for why you would want to run in Debug: those extra diagnostics do a lot of useful work that helps you catch bugs in your program, so the Release build has a better chance of working.
Debug mode: the application will run more slowly. Release mode: the application will run faster.
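As a small illustration of the kind of extra diagnostics that exist only in a Debug build (the method, the check and the message below are made up):

using System;
using System.Diagnostics;

static class Numerics
{
    public static double Invert(double value)
    {
        // Debug.Assert is marked [Conditional("DEBUG")], so this check is
        // compiled away entirely in a Release build.
        Debug.Assert(value != 0.0, "value must be non-zero");

#if DEBUG
        // Extra tracing that only exists when the DEBUG symbol is defined.
        Console.WriteLine($"Invert({value})");
#endif

        return 1.0 / value;
    }
}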
For a simple example of code that produces different results between a typical DEBUG and RELEASE build (unoptimized vs. optimized), try this in LINQPad:
void Main()
{
    float a = 10.0f / 3;   // constant expression, folded by the C# compiler
    float b = 10;
    b /= 3;                // computed at run time
    (a == b).Dump();       // prints whether the two compare equal
    (a - b).Dump();        // prints the difference between them
}
If you execute this with optimizations on (make sure the little button at the bottom right of the LINQPad window is set to "/o+"), you'll get this result:
False
-7,947286E-08
If you turn optimizations off, you get this:
True
0
Note that the produced IL code is the same in both cases; only the addresses differ, which might indicate that something other than pure IL is at play here, though I have no idea what that might be.
There are all sorts of things that can impact floating-point computation, the most significant of which is whether it actually writes the value to a local/field or not. It is possible that for the optimized build, the JIT is able to keep the value in a register; the FPU registers are 80 bits wide, to minimize cumulative errors. If it needs to actually write the value down to a 32-bit (float) or 64-bit (double) local or field, it will by necessity lose some of that precision. So yes, if it can do all the work in registers, it can give a different (usually more "correct") result than if it writes the intermediate values to locals etc.

There are other registers available too, but I doubt these are in use here: XMM/SSE registers are 128 bits wide; SIMD registers can (depending on the machine) be up to 512 bits.
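To make that storage-width point concrete, here is a small LINQPad-style sketch of my own (not part of the quoted answer): narrowing a double down to a float discards precision of roughly the same magnitude as the -7,947286E-08 difference shown earlier.

void Main()
{
    // 10/3 held at double precision, standing in for a value the JIT keeps
    // in a wide register.
    double wide = 10.0 / 3.0;

    // Forcing it into 32 bits, as happens when the value is spilled to a
    // float local or field, discards some of that precision.
    float narrowed = (float)wide;

    (wide - narrowed).Dump();   // roughly 7.947286E-08
}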