 

Reproducibility of floating point operation result

Is it possible for a floating-point arithmetic operation to yield different results on different CPUs? By CPUs I mean all of x86 and x64, and by different results I mean a difference of even a single least significant bit. I need to know whether I can use floating-point operations in a project where it is vital that the same input produces exactly the same results on different machines.

Edit: added the c++ tag.
Also, to clarify: I need reproducible results at run time; I wouldn't expect identical results from different compilations.

asked Aug 14 '12 by user1316208

1 Answer

In the gaming industry this is referred to as deterministic lockstep, and is very important for real-time networked games where the clients and server need to be in agreement about the state of physics objects (players, projectiles, deformable terrain etc).

According to Glenn Fiedler's article on Floating Point Determinism, the answer is "a resoundingly limp maybe"; if you run the same binary on the same architecture and restrict the use of features that are less well specified than basic floating-point, then you can get the same results. Otherwise, if you use different compilers, or allow your code to use SSE or 80-bit floating point, then results will vary between different executables and different machines.
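As a rough illustration (not part of the answer), here is a minimal, self-contained C++ sketch of the kind of single-bit divergence in question: whether the compiler contracts a * b + c into a fused multiply-add (controlled, for example, by GCC's -ffp-contract flag or by the target having FMA hardware) changes the result for suitably chosen inputs.

    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        double a = 1.0 + DBL_EPSILON, b = 1.0 - DBL_EPSILON, c = -1.0;

        // Without contraction, a * b rounds to exactly 1.0, so adding c gives 0.0.
        double separate = a * b + c;

        // A fused multiply-add keeps the product exact, so the result is
        // -DBL_EPSILON * DBL_EPSILON (about -4.9e-32) rather than 0.0.
        double fused = std::fma(a, b, c);

        std::printf("separate = %.17g\nfused    = %.17g\n", separate, fused);
        return 0;
    }

If contraction is enabled (or the compiler emits FMA by default), the first expression can silently produce the second value, which is exactly the kind of cross-binary difference described above.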

Yossi Kreinin recommends:

  • scanning assembler output for algebraic optimisations and applying them to your source code;
  • suppressing fused multiply-add and other advanced instructions (e.g. the sin trigonometric function);
  • and using SSE or SSE2, or otherwise setting the x87 FPU control word to 64-bit (double) precision; see the sketch after this list. (Yes, this conflicts with Glenn Fiedler's recommendation.)
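
For the last point, here is a minimal sketch (not from the answer, and glibc/x86-specific) that forces the legacy x87 FPU into 53-bit double precision so that intermediate results are not carried at 80 bits. On MSVC the analogous call is _controlfp_s with _PC_53; with GCC or Clang you can instead avoid x87 entirely by compiling with -msse2 -mfpmath=sse.

    // Assumes glibc on x86; <fpu_control.h> is not available on every platform.
    #include <fpu_control.h>

    void force_double_precision()
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);          // read the current x87 control word
        cw &= ~_FPU_EXTENDED;    // clear the precision-control bits (80-bit mode)
        cw |= _FPU_DOUBLE;       // select 53-bit (double) precision
        _FPU_SETCW(cw);          // write it back
    }

Call this once at program start-up, before any floating-point work; note that the control word is per-thread state, so worker threads may need the same call.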

And of course, test your code on multiple different machines; take hashes of intermediate outputs, so you can tell just where and when your simulations are diverging.
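
For example (a sketch, not from the answer; the function name is illustrative), hashing the exact bit patterns of your simulation state with 64-bit FNV-1a lets two machines compare a single checksum per frame and pinpoint the first step at which they diverge:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // 64-bit FNV-1a over the raw bit patterns of the doubles, so the hash
    // changes as soon as any value differs in even one bit (collisions aside).
    std::uint64_t hash_state(const std::vector<double>& state)
    {
        std::uint64_t h = 0xcbf29ce484222325ULL;        // FNV offset basis
        for (double d : state) {
            std::uint64_t bits;
            std::memcpy(&bits, &d, sizeof bits);        // compare bits, not values
            for (int i = 0; i < 8; ++i) {
                h ^= (bits >> (8 * i)) & 0xffULL;
                h *= 0x100000001b3ULL;                  // FNV prime
            }
        }
        return h;
    }

Logging hash_state(...) every simulation tick, or exchanging it between client and server, narrows a divergence down to a specific frame; you can then dump and diff the full state for that frame.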

answered by ecatmur