If I run the statement
Math.Exp(113.62826122038274).ToString("R")
on a machine with .NET 4.5.1 installed, then I get the answer
2.2290860617259248E+49
However, if I run the same statement on a machine with .NET Framework 4.5.2 installed, then I get the answer
2.2290860617259246E+49
(i.e. the final digit changes)
I realise that this is broadly insignificant in pure numeric terms, but does anyone know of any changes that have been made in .NET 4.5.2 that would explain the change?
(I don't prefer one result to the other, I am just interested to understand why it has changed)
If I output
The input in roundtrip format
The input converted to a long via BitConverter.DoubleToInt64Bits
Math.Exp in roundtrip format
Math.Exp converted to a long via BitConverter.DoubleToInt64Bits
then on 4.5.1 I get
113.62826122038274 4637696294982039780 2.2290860617259248E+49 5345351685623826106
and on 4.5.2 I get:
113.62826122038274 4637696294982039780 2.2290860617259246E+49 5345351685623826105
So for the exact same input, I get a different output (as can be seen from the raw bits, so no roundtrip formatting is involved).
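For completeness, code along these lines produces those four values (the program structure here is illustrative; only Math.Exp, the "R" format and BitConverter.DoubleToInt64Bits come from the actual test):

using System;

class ExpRepro
{
    static void Main()
    {
        double input = 113.62826122038274;
        double result = Math.Exp(input);

        Console.WriteLine(input.ToString("R"));                    // input, round-trip format
        Console.WriteLine(BitConverter.DoubleToInt64Bits(input));  // input as raw 64-bit pattern
        Console.WriteLine(result.ToString("R"));                   // Math.Exp, round-trip format
        Console.WriteLine(BitConverter.DoubleToInt64Bits(result)); // Math.Exp as raw 64-bit pattern
    }
}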
More details:
Compiled once using VS2015
Both machines that I am running the binaries on are 64-bit
One has .NET 4.5.1 installed, the other 4.5.2
Just for clarity: the string conversion is irrelevant... I get the change in results regardless of whether string conversion is involved. I mentioned that purely to demonstrate the change.
Sigh, the mysteries of floating-point math continue to stump programmers forever. It has nothing to do with the framework version. The relevant setting is Project > Properties > Build tab.
Platform target = x86: 2.2290860617259248E+49
Platform target = AnyCPU or x64: 2.2290860617259246E+49
If you run the program on a 32-bit operating system then you always get the first result. Note that the roundtrip format is overspecified: it contains more digits than a double is guaranteed to preserve, which is 15. Count them off and you get 17. This ensures that the binary representation of the double, the 1s and 0s, survives the conversion to text and back. The difference between the two values is the least significant bit of the mantissa.
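You can check that the two results really are adjacent doubles by converting the bit patterns from the question back; a small sketch (the class and variable names are mine):

using System;

class UlpCheck
{
    static void Main()
    {
        // Bit patterns reported in the question, one per jitter flavor.
        long bitsFpu = 5345351685623826106;   // 2.2290860617259248E+49
        long bitsSse = 5345351685623826105;   // 2.2290860617259246E+49

        Console.WriteLine(bitsFpu - bitsSse);  // 1: the results differ by exactly one ulp
        Console.WriteLine(BitConverter.Int64BitsToDouble(bitsFpu).ToString("R"));
        Console.WriteLine(BitConverter.Int64BitsToDouble(bitsSse).ToString("R"));
    }
}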
The reason the LSB is not the same is that the x86 jitter is encumbered with generating code for the FPU, which has the very undesirable property of using more bits of precision than a double can store: 80 bits instead of 64. Theoretically that produces more accurate calculation results, which it does, but rarely in a reproducible way. Small changes to the code can produce large changes in the calculation result, and just running the code with a debugger attached can change the result, since that disables the optimizer.
Intel fixed this mistake with the SSE2 instruction set, which completely replaces the floating-point math instructions of the FPU. It does not use extra precision; a double is always 64 bits. This has the highly desirable property that the calculation result no longer depends on intermediate storage, so it is much more consistent, but slightly less accurate.
That the x86 jitter uses FPU instructions is a historical accident: when it was released in 2002, not enough processors supported SSE2. The accident cannot be fixed anymore, since doing so would change the observable behavior of a program. It was not a problem for the x64 jitter; a 64-bit processor is guaranteed to also support SSE2.
A 32-bit process uses an exp() implementation built on FPU code; a 64-bit process uses one built on SSE code. The results may differ by one LSB, but both are still accurate to 15 significant digits and round to the same 2.229086061725925E+49. That is all you can ever expect out of math with double.
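If you need output that is identical on both jitters, one option is to ask for no more than 15 significant digits instead of the round-trip format; a sketch (the "G15" choice is mine, not something the framework does for you):

using System;

class StableOutput
{
    static void Main()
    {
        double result = Math.Exp(113.62826122038274);

        // "G15" rounds to 15 significant digits, which both code paths agree on.
        // "R" asks for a full round-trip representation and exposes the last-bit difference.
        Console.WriteLine(result.ToString("G15"));  // 2.22908606172592E+49 on both
        Console.WriteLine(result.ToString("R"));    // last digit differs between x86 and x64
    }
}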