I have an implementation of an algorithm in C that uses floats. When I compile and run it on i386 I get different results from those I get when I compile and run it on armel. In particular, dividing an int by a float yields a different float on each platform.
I've extracted some code from my algorithm to demonstrate this problem:
#include <stdio.h>

int main(void)
{
    float x = 4.80000019f;
    float y = 4.80000019f;
    int a = 38000;
    int b = 10000;
    int result = (a/x) + (b/y);
    printf("%.8f, %.8f\n", x, y); // same on i386 and armel
    printf("%f, %f\n", a/x, b/y); // slightly different on each
    printf("%d\n", result);       // prints 9999 on i386, and 10000 on armel
    return 0;
}
Can anybody explain why the two platforms generate different results?
Alex
Look up 'excess precision'. To suppress it on a modern x86, compile with -msse2 -mfpmath=sse.
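If changing the compiler flags isn't an option, you can try to get the same effect in source by forcing every intermediate result into a float object: a cast or assignment is supposed to discard excess precision (C11 6.3.1.8), though older GCC on x87 only honours this reliably with -ffloat-store or -fexcess-precision=standard. A minimal sketch (my own rewrite of the question's code, not something from the question itself) with the intermediates pinned to single precision:

#include <stdio.h>

int main(void)
{
    float x = 4.80000019f;
    float y = 4.80000019f;
    int a = 38000;
    int b = 10000;

    /* Round each quotient back to a 32-bit float before summing,
       rather than letting the x87 keep 80-bit intermediates. */
    float qa = (float)(a / x);
    float qb = (float)(b / y);
    int result = (int)(qa + qb);

    printf("%d\n", result); /* 10000 on both i386 and armel,
                               assuming the stores actually round */
    return 0;
}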
I can't test it on ARM right now, but you get different results even between i386 and amd64 on the very same CPU, just by compiling with the -m32 flag. That's down to the internal FPU structure: the i387 performs operations in 80-bit floating-point registers and then shrinks the result back to a 32-bit float (when requested). The amd64 instruction set uses SSE instead, which doesn't have such wide registers (operands are still at least 32 bits, of course). I'd expect ARM to work in at least 32 bits too, but anything beyond that is not guaranteed.
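To see both behaviours side by side on one machine, here is a minimal sketch (my own, not from the question) that imitates the two evaluation strategies. It assumes a typical x86-64 build, where long double is the 80-bit x87 format and plain float arithmetic is done in SSE single precision:

#include <stdio.h>

int main(void)
{
    float x = 4.80000019f;
    int a = 38000, b = 10000;

    /* i387-style: keep the quotients in 80-bit extended precision
       and only truncate the final sum to int. */
    long double wide = (long double)a / x + (long double)b / x;
    printf("extended intermediates: %d\n", (int)wide);  /* 9999 */

    /* ARM/SSE-style: round every intermediate to a 32-bit float. */
    float qa = (float)a / x;
    float qb = (float)b / x;
    float sum = qa + qb;   /* rounds up to exactly 10000.0f */
    printf("single intermediates:   %d\n", (int)sum);   /* 10000 */

    return 0;
}

The sum of the two single-precision quotients lands within half an ulp of 10000.0, so rounding it to float crosses the integer boundary that the 80-bit sum (about 9999.9996) never reaches.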