I am using a debug build and getting different results on the same machine depending on whether or not I run under the debugger. I am using the excellent TestDriven.Net to run the unit tests.
The code is:
I have not tracked back to the first difference (tricky without a debugger!), but given how iterative the code is, it's input-sensitive and the tiniest difference will grow to significant proportions given enough time.
I am aware of how fragile floating-point reproducibility is across compilers, platforms and architectures, but I'm disappointed to find that the debugger is one of the factors that throws it off.
Do I just have to accept this as a fact of life, or is there any advice you can offer?
You have to accept it as a fact of life. Floating point code can be optimized differently in different situations. In particular, in some cases the JIT compiler can use a representation with more precision/accuracy (e.g. 80-bit floating point) for operations. The situations under which the JIT compiler will do this will depend on the architecture, optimization settings etc. There can be any number of subtleties about what you do with a variable (and whether it's a local variable or not) which can affect this. Running under a debugger affects JIT optimization settings very significantly in general - not just for floating point - so I'm not at all surprised by this.
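To illustrate the point, here is a minimal, hypothetical sketch (not the code from the question) of the kind of iterative computation that amplifies last-bit differences, together with one commonly cited mitigation: casting intermediate results explicitly to double, which the C# specification allows as a way to discard any extra precision the JIT may have kept in a register. How much difference the cast makes in practice depends on the JIT, the architecture (x87 vs. SSE) and whether a debugger is attached.

```csharp
using System;

class FloatingPointDrift
{
    static double IterateUncast(double x, int steps)
    {
        for (int i = 0; i < steps; i++)
        {
            // The JIT may hold this intermediate at higher precision
            // (e.g. in an 80-bit x87 register) on some architectures.
            x = x * 3.9 * (1.0 - x);
        }
        return x;
    }

    static double IterateCast(double x, int steps)
    {
        for (int i = 0; i < steps; i++)
        {
            // The explicit cast requests rounding to 64-bit double at each
            // step, discarding any extra precision the JIT may have kept.
            x = (double)(x * 3.9 * (1.0 - x));
        }
        return x;
    }

    static void Main()
    {
        // A chaotic (logistic) map amplifies any last-bit difference,
        // mimicking the "tiniest difference grows" behaviour described
        // in the question.
        Console.WriteLine(IterateUncast(0.5, 1000).ToString("R"));
        Console.WriteLine(IterateCast(0.5, 1000).ToString("R"));
    }
}
```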
If you perform floating point comparisons with a certain tolerance, it should be fine - it's very rarely a good idea to do exact equality comparisons on floating point types anyway. Of course it's possible that you're actually performing a non-equality comparison where the differences become significant, but I've rarely come across that as a problem.
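To make that concrete, here is a small, hypothetical NUnit-style test (the class name, reference value and tolerance are made up for illustration; TestDriven.Net can run NUnit tests) showing a tolerance-based assertion in place of exact equality:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class IterativeResultTests
{
    // Hypothetical reference value and tolerance; both would need to be
    // chosen to suit the real algorithm's expected accuracy.
    private const double Expected = 0.7390851332151607;
    private const double Tolerance = 1e-9;

    [Test]
    public void ConvergesToFixedPointWithinTolerance()
    {
        double actual = RunIteration();

        // Compare with a tolerance rather than ==, so that harmless
        // last-bit differences (debugger vs. no debugger, x86 vs. x64)
        // don't fail the test.
        Assert.That(actual, Is.EqualTo(Expected).Within(Tolerance));
    }

    private static double RunIteration()
    {
        // Stand-in for the poster's iterative computation: repeated
        // cosine converges to the same fixed point from any start value.
        double x = 1.0;
        for (int i = 0; i < 100; i++)
        {
            x = Math.Cos(x);
        }
        return x;
    }
}
```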