
Slightly different result from exp function on Mac and Linux

The following C program produces different results on my Mac and on Linux. I'm surprised, because I assumed that the implementation of libm is somehow standardized.

#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
  double x18 = -6.899495205106946e+01;
  double x19 = exp(-x18);

  /* memcpy the bits into an integer; a pointer cast like
     *(unsigned long long *)&x19 would violate strict aliasing. */
  uint64_t bits;
  memcpy(&bits, &x19, sizeof bits);

  printf("x19     = %.15e\n", x19);
  printf("x19 hex = %" PRIx64 "\n", bits);
  return 0;
}

The output on Mac is

x19     = 9.207186811339878e+29
x19 hex = 46273e0149095886

and on Linux

x19     = 9.207186811339876e+29
x19 hex = 46273e0149095885

Both were compiled without any optimization flags as follows:

gcc -lm ....

I know that I should never compare floats for exact equality.

This issue came up during debugging; regrettably, the algorithm using this calculation proves to be numerically unstable, and this slight difference leads to significant deviations in the final result. But that is a different problem.

I'm just surprised that a basic operation like exp is not specified as tightly as the basic arithmetic operations in IEEE 754.

Are there any accuracy guarantees I can rely on across different implementations of libm, different machines, or different versions?


Because of the discussion below, I used mpmath to compute the value at higher than machine precision. With two more digits the result is 9.2071868113398768244e+29, so in both of my results the last digit is already off. The Linux result can be explained by rounding this value down; the Mac result is off even if the machine rounds up.
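The same higher-precision check can be reproduced without mpmath using Python's standard-library decimal module; this is a sketch, not the exact script used above. `Decimal(float(...))` captures the exact IEEE 754 double that the C program computes with, so the reference value is evaluated from the same input bits.

```python
from decimal import Decimal, getcontext

# Work at 25 significant digits, well beyond double precision (~16 digits).
getcontext().prec = 25

# float(...) parses the literal to the exact IEEE 754 double the C program
# uses; Decimal(float) then represents those bits exactly.
x18 = Decimal(float('-6.899495205106946e+01'))

ref = (-x18).exp()
print(ref)  # compares against the mpmath value 9.2071868113398768244e+29
```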

asked Jun 26 '17 by rocksportrocker

1 Answer

The C99 specification states (other versions should be similar):

J.3 Implementation-defined behavior

1 A conforming implementation is required to document its choice of behavior in each of the areas listed in this subclause. The following are implementation-defined:

...

J.3.6 Floating point

— The accuracy of the floating-point operations and of the library functions in <math.h> and <complex.h> that return floating-point results (5.2.4.2.2).

Meaning GNU libm and BSD libm are free to have different levels of accuracy. What is likely happening is that the BSD implementation on OS X rounds to the nearest ULP (unit in the last place), while the GNU implementation truncates down to the next ULP.

answered Nov 15 '22 by dlasalle