How can I guarantee that floating-point calculations in a .NET application (say, in C#) always produce the same bit-exact result, especially across different versions of .NET and different platforms (x86 vs. x86_64)? The accuracy of the floating-point operations does not matter, only their reproducibility.
In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by setting the FPU/SSE control registers, but that's probably not possible in .NET.
Even with control over the FPU control register, the .NET JIT will generate different code on different platforms. Something like HotSpot would be even worse in this case...
Why do I need it? I'm thinking about writing a real-time strategy (RTS) game that heavily depends on fast floating-point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games that implement replays by storing the user input.
Not an option:
Any ideas?
I'm not sure of the exact answer to your question, but you could use C++, do all your floating-point work in a C++ DLL, and then return the result to .NET through interop.
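A minimal sketch of what the managed side of that could look like. The DLL name "fpmath.dll" and the exported function AddF are hypothetical; the native side would be compiled with a fixed floating-point environment (e.g. SSE2 only, no /fp:fast) and export something like `extern "C" __declspec(dllexport) float __cdecl AddF(float a, float b);`.

```csharp
using System.Runtime.InteropServices;

static class NativeFloat
{
    // Hypothetical native DLL built with pinned FPU/SSE settings, so the
    // same machine code performs the operation on every platform.
    [DllImport("fpmath.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern float AddF(float a, float b);
}

// Usage: float sum = NativeFloat.AddF(x, y);
```

The per-call interop overhead is significant, so in practice you would batch whole simulation steps into one native call rather than crossing the boundary per operation.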
Bit-exact results across different platforms are a pain in the a**. If you only use x86, it should not matter, because the FPU does not change from 32-bit to 64-bit. But the problem is that the transcendental functions may be more accurate on newer processors.
The four basic operations should not give different results, but your VM may optimize expressions (for example, keep intermediates at a higher precision than their declared type), and that may give different results. So, as Ants proposed, write your add/mul/div/sub routines as unmanaged code to be on the safe side.
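A partial, managed-only mitigation for the intermediate-precision problem, sketched below: ECMA-335 permits the runtime to hold floating-point intermediates at higher precision (e.g. in 80-bit x87 registers), but an explicit cast must narrow the value to its declared size. This removes one source of divergence; it is not a full guarantee, and the method name here is purely illustrative.

```csharp
static float MulAdd(float a, float b, float c)
{
    // Force rounding back to 32 bits after every operation, instead of
    // letting the JIT keep the intermediate at extended precision.
    float product = (float)(a * b);
    return (float)(product + c);
}
```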
For the transcendental functions, I am afraid you must use a lookup table to guarantee bit-exactness. Calculate the result for, e.g., 4096 values, store them as constants, and if you need a value in between, interpolate. This does not give you great accuracy, but it will be bit-exact.
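A minimal sketch of such a table-based sine, assuming (as argued above) that the basic operations +, -, * are bit-exact across platforms. The table size of 4096 follows the suggestion; the class and method names are illustrative.

```csharp
using System;

static class DeterministicMath
{
    const int TableSize = 4096;                   // power of two for cheap wrapping
    static readonly float[] SinTable = new float[TableSize + 1];

    static DeterministicMath()
    {
        // NOTE: filling the table with Math.Sin at runtime is itself not
        // bit-exact across platforms; in a real game you would ship these
        // 4096 values as precomputed constants or load them from data.
        for (int i = 0; i <= TableSize; i++)
            SinTable[i] = (float)Math.Sin(2.0 * Math.PI * i / TableSize);
    }

    public static float Sin(float radians)
    {
        // Map the angle onto a table position measured in table steps.
        float t = radians / (2f * (float)Math.PI) * TableSize;
        int floorT = (int)Math.Floor(t);
        float frac = t - floorT;                  // fractional part in [0, 1)
        int index = floorT & (TableSize - 1);     // wrap into [0, TableSize)

        // Linear interpolation between the two neighbouring entries; this
        // uses only +, -, *, so it stays bit-exact under the assumption above.
        return SinTable[index] + (SinTable[index + 1] - SinTable[index]) * frac;
    }
}
```

The extra table entry at index TableSize duplicates the value at index 0, so the interpolation never reads past the end of the array when wrapping around a full period.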