
Floating point calculations change depending on the compiler

When I run the exact same code performing the exact same floating point calculation (using doubles) compiled on Windows and Solaris, I get slightly different results.

I am aware that the results are not exact due to rounding errors. However, I would have expected the rounding errors to be platform-independent, thereby giving me the same (slightly incorrect) result on both platforms, which is not the case.

Is this normal, or do I have another problem in my code?

asked Mar 04 '10 by Laurent

2 Answers

On x86, most calculations usually happen with 80-bit quantities unless they are explicitly forced to double precision. Most other architectures I know of do all calculations in double precision (again, unless otherwise overridden).

I don't know if you're running Solaris on SPARC or x86, but if the former, then I highly suspect that to be the cause of the difference.
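
To illustrate the effect (my own sketch, not part of the original answer): the fragment below can show a freshly computed quotient disagreeing with its stored double value when an x87 build keeps the quotient in an 80-bit register for the comparison. With SSE2 arithmetic, or with the x87 forced to 53-bit precision, both evaluations should agree. Whether the message actually prints depends on the compiler, flags (e.g. GCC's -mfpmath=387 vs -mfpmath=sse) and optimization level.

    #include <stdio.h>

    int main(void)
    {
        volatile double x = 1.0, y = 3.0;  /* volatile discourages constant folding */
        double q = x / y;                  /* stored to memory, hence rounded to a 64-bit double */

        /* If the second x / y is kept in an 80-bit x87 register for the
           comparison, it carries extra precision and can compare unequal
           to the stored value q. */
        if (q != x / y)
            printf("excess x87 precision observed\n");
        else
            printf("both evaluations agree\n");
        return 0;
    }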

answered by Chris Jester-Young

I believe that under Windows/x86, your code will run with the x87 precision already set to 53 bits (double precision), though I'm not sure exactly when this gets set. On Solaris/x86 the x87 FPU is likely to be using its default precision of 64 bits (extended precision), hence the difference.
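
For illustration (again, my own sketch rather than anything from the answer), the x87 precision-control field can be inspected and changed from C. The MSVC branch uses the documented _controlfp_s / _PC_53 interface (32-bit x86 only; on x64 MSVC uses SSE2 and the precision-control bits are unsupported), and the glibc branch uses the _FPU_GETCW / _FPU_SETCW macros. Solaris has its own interface, which is not shown here.

    #include <stdio.h>

    #if defined(_MSC_VER) && defined(_M_IX86)
    #include <float.h>
    #elif defined(__GLIBC__) && defined(__i386__)
    #include <fpu_control.h>
    #endif

    int main(void)
    {
    #if defined(_MSC_VER) && defined(_M_IX86)
        /* MSVC, 32-bit x86: request 53-bit (double) significands for x87 results. */
        unsigned int prev;
        _controlfp_s(&prev, _PC_53, _MCW_PC);
        printf("x87 precision control set to 53 bits\n");
    #elif defined(__GLIBC__) && defined(__i386__)
        /* glibc, 32-bit x86: clear the precision-control field, then select double. */
        fpu_control_t cw;
        _FPU_GETCW(cw);
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
        _FPU_SETCW(cw);
        printf("x87 precision control set to 53 bits\n");
    #else
        printf("x87 precision control not adjusted on this platform\n");
    #endif
        return 0;
    }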

There's a simple check you can do to detect which precision (53 bits or 64 bits) is being used: try computing something like 1e16 + 2.9999, while being careful to avoid compiler constant-folding optimizations (e.g., define a separate add function to do the addition, and turn off any optimizations that might inline functions). When using 53-bit precision (SSE2, or x87 in double-precision mode) this gives 1e16 + 2; when using 64-bit precision (x87 in extended precision mode) this gives 1e16 + 4. The latter result comes from an effect called 'double rounding', where the result of the addition is rounded first to 64 bits, and then to 53 bits. (Doing this calculation directly in Python, I get 1e16 + 4 on 32-bit Linux, and 1e16+2 on Windows, for exactly the same reason.)
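
Here is a minimal sketch of that check in C (my own illustration; the helper name add and the output format are just choices I made). Build it without optimizations so the sum is not folded at compile time:

    #include <stdio.h>

    /* A separate addition function makes it less likely that the compiler
       folds 1e16 + 2.9999 into a compile-time constant. */
    double add(double a, double b)
    {
        return a + b;
    }

    int main(void)
    {
        /* Expected output:
           53-bit arithmetic (SSE2, or x87 in double-precision mode): 10000000000000002
           64-bit x87 extended precision followed by rounding to double: 10000000000000004 */
        printf("%.0f\n", add(1e16, 2.9999));
        return 0;
    }

On a machine with GCC, compiling the same file once as a 32-bit x87 build (e.g. -m32 -mfpmath=387 -O0) and once with SSE2 math (e.g. -mfpmath=sse -msse2 -O0) is one way to see both outputs side by side.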

Here's a very nice article (that goes significantly beyond the oft-quoted Goldberg's "What every computer scientist should know...") that explains some of the problems arising from the use of the x87 FPU:

http://hal.archives-ouvertes.fr/docs/00/28/14/29/PDF/floating-point-article.pdf

answered by Mark Dickinson