
sine result depends on C++ compiler used

I use the following two C++ compilers:

  • cl.exe : Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24210 for x86
  • g++ : g++ (Ubuntu 5.2.1-22ubuntu2) 5.2.1 20151010

When using the built-in sine function, I get different results. This is not critical, but sometimes the differences are too significant for my use. Here is an example with a hard-coded value:

printf("%f\n", sin(5451939907183506432.0));

Result with cl.exe:

0.528463

Result with g++:

0.522491
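
For completeness, the one-liner above wrapped into a minimal, self-contained program (only the includes and main are added):

#include <cmath>
#include <cstdio>

int main() {
    // Same call as above; only the compiler/runtime changes the result.
    std::printf("%f\n", std::sin(5451939907183506432.0));
}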

I know that g++'s result is more accurate and that I could use an additional library to get the same result, but that's not my point here. I would really like to understand what happens here: why is cl.exe that wrong?

Funny thing: if I apply a modulo of (2 * pi) to the argument, then I get the same result as g++...

[EDIT] Just because my example looks crazy to some of you: this is part of a pseudorandom number generator. It is not important whether the result of the sine is accurate or not: we just need it to produce some result.

— asked by Nicolas, Oct 12 '17


3 Answers

You have a 19-digit literal, but a double usually has only 15-17 significant digits of precision. As a result, you can get a small relative error (when converting to double), but a large enough absolute error in the context of a sine calculation.
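
To put numbers on this (a quick sketch using only standard <cmath>): the gap between adjacent doubles at this magnitude already spans many full periods of sine, so even a one-ulp difference anywhere in the computation changes the result completely.

#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    const double x = 5451939907183506432.0;
    // Distance to the next representable double (one ulp):
    // 2^10 = 1024 here, since x lies in [2^62, 2^63).
    const double ulp = std::nextafter(x, std::numeric_limits<double>::infinity()) - x;
    const double two_pi = 2.0 * std::acos(-1.0);
    std::printf("ulp = %.0f\n", ulp);                       // 1024
    std::printf("periods per ulp = %.1f\n", ulp / two_pi);  // ~163
}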

Actually, different implementations of the standard library treat such large numbers differently. For example, in my environment, if we execute

std::cout << std::fixed << 5451939907183506432.0;

  • g++ result: 5451939907183506432.000000
  • cl result: 5451939907183506400.000000

The difference is because versions of cl earlier than 19 have a formatting algorithm that uses only a limited number of digits and fills the remaining decimal places with zeros.
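
To rule out formatting as a factor, one can print the hexadecimal float representation instead (a sketch; %a shows the stored bits exactly, so any remaining difference would be in the value itself, not in the decimal formatting):

#include <cstdio>

int main() {
    // %a prints the double's exact bits; decimal-formatting
    // differences between runtimes cannot affect this output.
    std::printf("%a\n", 5451939907183506432.0);
}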

Furthermore, let's look at this code:

#include <cmath>
#include <iostream>

int main() {
    double a[1000];
    for (int i = 0; i < 1000; ++i) {
        a[i] = std::sin(5451939907183506432.0);
    }
    double d = std::sin(5451939907183506432.0);
    std::cout << a[500] << std::endl;
    std::cout << d << std::endl;
}

When executed with my x86 VC++ compiler the output is:

0.522491
0.528463

It appears that when filling the array, sin is compiled to a call to __vdecl_sin2, whereas the single operation is compiled to a call to __libm_sse2_sin_precise (with /fp:precise). The two library routines evidently compute sine differently for such large arguments, hence the two results.

In my opinion, your number is simply too large for a sin calculation to expect the same behavior from different compilers, or to expect correct behavior at all.

— answered by DAle


I think Sam's comment is closest to the mark. Whereas your recentish version of GCC/glibc implements sin() in software (and evaluates it at compile time for the literal in question), cl.exe for x86 likely uses the fsin instruction. The latter can be very imprecise, as described in the Random ASCII blog post "Intel Underestimates Error Bounds by 1.3 quintillion".

Part of the problem with your example in particular is that Intel uses an imprecise approximation of pi when doing range reduction:

When doing range reduction from double-precision (53-bit mantissa) pi the results will have about 13 bits of precision (66 minus 53), for an error of up to 2^40 ULPs (53 minus 13).
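
Plugging the question's argument into that figure (a back-of-the-envelope sketch; the 66-bit approximation of pi is the one implied by the quote above): a 66-bit pi has a relative error of roughly 2^-67, and range reduction multiplies that error by the argument.

#include <cmath>
#include <cstdio>

int main() {
    const double x = 5451939907183506432.0;
    // Relative error of a 66-bit pi is about 2^-67; reduction
    // scales it by x, giving roughly 0.037 radians of error --
    // easily enough to explain 0.528463 vs 0.522491.
    const double shift = x * std::ldexp(1.0, -67);
    std::printf("reduction error ~ %.3f radians\n", shift);
}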

— answered by SloopJon


According to cppreference:

The result may have little or no significance if the magnitude of arg is large (until C++11)

It's possible that this is the cause of the problem, in which case you will want to do the modulo manually so that arg is not large.
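
A sketch of that manual reduction using std::fmod (note that the reduction is itself only as accurate as the double value of 2*pi, so this buys consistency across compilers rather than mathematical exactness):

#include <cmath>
#include <cstdio>

int main() {
    const double x = 5451939907183506432.0;
    const double two_pi = 2.0 * std::acos(-1.0);
    // Reduce the argument before calling sin, as the question's
    // author observed; the result is consistent across compilers.
    std::printf("%f\n", std::sin(std::fmod(x, two_pi)));
}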

— answered by SirGuy