So I have a bit of a WTF on my hands: Double precision math is returning different results based on which thread it runs on.
Code:
double d = 312554083.518955;
Console.WriteLine(d);
d += 0.1d;
Console.WriteLine(d);
d = 2554083.518955;
Console.WriteLine(d);
d += 0.1d;
Console.WriteLine(d);
This prints:
312554083,518955
312554080
2554083,518955
2554083,5
but if I execute it on a brand new thread, it returns:
312554083,518955
312554083,618955
2554083,518955
2554083,618955
(Which, you know, are the correct results.)
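For reference, here's a minimal sketch of how that comparison can be run; the Thread usage below is my own illustration rather than the project's actual code:

using System;
using System.Threading;

class FpuRepro
{
    static void PrintSums()
    {
        double d = 312554083.518955;
        Console.WriteLine(d);
        d += 0.1d;
        Console.WriteLine(d);
        d = 2554083.518955;
        Console.WriteLine(d);
        d += 0.1d;
        Console.WriteLine(d);
    }

    static void Main()
    {
        // Run once on the current thread...
        PrintSums();

        // ...and again on a brand new thread, which starts with the default FPU state.
        var t = new Thread(PrintSums);
        t.Start();
        t.Join();
    }
}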
As you can see, something cuts the results off after about eight significant digits, whether they fall before or after the decimal point. I am running a fair bit of native code on the thread that returns the incorrect results (DirectX (SlimDX), FreeType2, FMOD); maybe one of them is configuring the FPU to do this or something. This code, however, is pure C# - and the MSIL it compiles to is the same regardless of which thread it runs on.
Has anyone seen something like this before? What can the cause be?
Yes, DirectX does indeed change things - it sets the FPU's precision control to single precision, which changes how arithmetic results are rounded. That's going to be the cause of the issue - although the usual warnings about expecting binary floating point numbers to give "accurate" decimal results still apply, of course. The numbers you've shown aren't the exact values of the doubles in the first place.
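A quick way to see that the "wrong" output lines up with single-precision rounding is to round the same sums to float; the snippet below is just an illustration, not anything from the original code:

using System;

class SinglePrecisionCheck
{
    static void Main()
    {
        // Rounding each sum to a 32-bit float reproduces the broken thread's output,
        // which is consistent with the FPU having been switched to 24-bit precision:
        Console.WriteLine((double)(float)(312554083.518955 + 0.1)); // 312554080
        Console.WriteLine((double)(float)(2554083.518955 + 0.1));   // 2554083.5
    }
}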
If you want to avoid DirectX changing things, have a look at this question, and in particular this bit of Greg's answer:
You can tell Direct3D not to mess with the FPU precision by passing the D3DCREATE_FPU_PRESERVE flag to CreateDevice. There is also a managed code equivalent to this flag (CreateFlags.FpuPreserve) if you need it.
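In SlimDX terms, that ends up looking roughly like the following; the form, adapter index, and present parameters here are placeholders, so treat it as a sketch rather than a drop-in snippet:

using System;
using System.Windows.Forms;
using SlimDX.Direct3D9;

class FpuPreserveSketch
{
    static void Main()
    {
        var form = new Form();           // placeholder render window
        var direct3D = new Direct3D();
        var presentParams = new PresentParameters
        {
            Windowed = true,
            SwapEffect = SwapEffect.Discard
        };

        // CreateFlags.FpuPreserve tells Direct3D to leave the FPU precision alone,
        // so double arithmetic elsewhere in the process keeps its full precision.
        var device = new Device(direct3D, 0, DeviceType.Hardware, form.Handle,
            CreateFlags.HardwareVertexProcessing | CreateFlags.FpuPreserve,
            presentParams);
    }
}

Setting the flag keeps the FPU in double-precision mode, at some potential cost to Direct3D performance.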