I have an algorithm/computation in Java and a unit test for it. The unit test expects a result with some precision/delta. Now I have ported the algorithm to .NET and would like to reuse the same unit test. I work with the double data type.
The problem is that Java uses strictfp (64-bit) semantics for some operations in the Math class, whereas .NET always uses the FPU/CPU (80 bits). .NET is more precise and faster; Java is more predictable.
Because my algorithm is cyclic and reuses the results from the previous round, the error/difference from the extra precision accumulates until it is too big. I don't rely on speed (for the unit test), and I'm happy to use .NET's precision in production, but I would like to validate the implementation.
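To make the failure mode concrete, here is a hypothetical sketch (not the real algorithm, just its shape) of how a cyclic computation lets a tiny per-round difference between runtimes compound:

using System;

class DriftDemo
{
    static void Main()
    {
        // Each round feeds the previous result back in, so a sub-ulp
        // difference per step can compound into a delta far larger
        // than the unit test's tolerance.
        double state = 0.5;
        for (int i = 0; i < 1_000_000; i++)
            state = Math.Atan2(state, 1.0 + 1e-9 * i);
        Console.WriteLine(state);
    }
}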
Consider this from the JDK:
public final class Math {
    public static double atan2(double y, double x) {
        return StrictMath.atan2(y, x); // default impl. delegates to StrictMath
    }
}
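For comparison, the ported .NET side has no StrictMath counterpart to delegate to; a hypothetical PortedMath wrapper can only call System.Math directly:

public static class PortedMath
{
    // There is no StrictMath equivalent in the .NET BCL, so the exact
    // result of Math.Atan2 depends on the runtime/hardware.
    public static double Atan2(double y, double x) => System.Math.Atan2(y, x);
}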
I'm looking for a library or technique to get strict FP in .NET.
Preemptive comment: I do understand the IEEE 754 format and the fact that a floating-point number is not an exact decimal number or fraction. No Decimal, no BigInt or BigNumber. Please don't answer this way, thanks.
I extensively researched this issue earlier this year, as I wanted to know whether it was possible to base a multiplayer simulation on floating-point arithmetic in .NET. Some of my findings may be useful to you:
It is possible to emulate "strict" mode by inserting redundant casts everywhere, but this appears to be a brittle, C#-specific, and tedious solution (see the sketch below).
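For illustration, a minimal sketch of that cast technique (StrictOps and MulAdd are made-up names; the narrowing behaviour of the explicit cast is what ECMA-335 guarantees):

static class StrictOps
{
    // An explicit conversion to double narrows any extended-precision
    // intermediate (e.g. an 80-bit x87 register value) back to a true
    // 64-bit double. Every intermediate expression needs its own cast,
    // which is what makes this approach so brittle and tedious.
    public static double MulAdd(double a, double b, double c)
        => (double)((double)(a * b) + c);
}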
The 32-bit JIT emits x87 instructions, but the 64-bit JIT emits SSE instructions. This is true of both Microsoft's and Mono's implementations. Unlike x87, SSE floating-point arithmetic is reproducible.
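If you rely on this, it may be worth failing fast when the test is accidentally run as a 32-bit process; a small sketch:

using System;

static class JitGuard
{
    // The 64-bit JIT uses SSE for double arithmetic; the 32-bit JIT
    // may use x87. Failing fast avoids silently losing reproducibility.
    public static void RequireSse()
    {
        if (!Environment.Is64BitProcess)
            throw new PlatformNotSupportedException(
                "Run as 64-bit so floating-point math uses SSE, not x87.");
    }
}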
I believe System.Math simply calls the equivalent C runtime functions, although I've been unable to step into the assembly to verify this (if someone knows how to do this, please check!). The C runtime uses SSE versions of the transcendental functions where possible, except in a few cases, notably sqrt (but writing a wrapper for that via intrinsics is trivial; see the sketch below). By the very nature of SSE these must be reproducible. It is programmatically possible to determine whether the C runtime is using its SSE implementation rather than x87.
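On runtimes newer than this answer assumes (.NET Core 3.0+ hardware intrinsics), such a sqrt wrapper could look like the following sketch:

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class StrictSqrt
{
    // sqrtsd is fully specified by IEEE 754 (correctly rounded), so the
    // SSE2 scalar square root is bit-for-bit reproducible. Assumes
    // Sse2.IsSupported; check it before calling on exotic hardware.
    public static double Sqrt(double x) =>
        Sse2.SqrtScalar(Vector128.CreateScalarUnsafe(x)).ToScalar();
}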
For the remaining transcendental functions not available in SSE (fmod, sinh, cosh, tanh), it may be that they do not cause reproducibility issues, provided no further x87 operation is performed on their results.
So in short, sticking with the 64-bit CLR should solve the problem for arithmetic; as for transcendental functions, most are already implemented in SSE, and I'm not even sure any fix is necessary if you don't perform any x87 arithmetic on the results.
Unfortunately, there is no way to enforce FP strictness in C#; the .NET CLR simply lacks the ability to do calculations with less precision than the maximum it can provide.
I think there's a performance gain from this: the runtime never checks whether you might want less precision. There's also no need: .NET doesn't run in a virtual machine, so it doesn't have to worry about different floating-point processors.
However, isn't strictfp optional? Could you execute your Java code without the strictfp modifier? Then it should pick up the same floating-point mechanism as .NET.
So instead of forcing .NET to use strictfp and checking that it comes out with the same values as your Java code, you could force Java to not use strictfp and check that it then comes out the same as the .NET code.