I'm looking into why a test case is failing. The problematic test can be reduced to computing (4.0/9.0) ** (1.0/2.6), rounding the result to 6 digits, and checking it against a known value (as a string):
#include <stdio.h>
#include <math.h>

int main(void) {
    printf("%.06f\n", powf(4.0/9.0, 1.0/2.6));
    return 0;
}
If I compile and run this with gcc 4.1.2 on Linux, I get:
0.732057
Python agrees, as does Wolfram|Alpha:
$ python2.7 -c 'print "%.06f" % (4.0/9.0)**(1/2.6)'
0.732057
However, I get the following result with gcc 4.4.0 on Linux and gcc 4.2.1 on OS X:
0.732058
Using a double instead behaves identically (although I didn't test this extensively).
I'm not sure how to narrow this down any further. Is this a gcc regression? A change in rounding algorithm? Me doing something silly?
Edit: Printing the result to 12 digits shows that the digit in the 7th place is 4 vs. 5, which explains the rounding difference but not the difference in the underlying value:
gcc 4.1.2:
0.732057452202
gcc 4.4.0:
0.732057511806
Here's the gcc -S output from both versions: https://gist.github.com/1588729
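One further way to narrow it down (my own diagnostic sketch, not from the original post) is to print the exact value and the raw bits of the float each compiler produces, using the C99 %a hex-float conversion:

#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float r = powf(4.0/9.0, 1.0/2.6);
    uint32_t bits;
    memcpy(&bits, &r, sizeof bits);  /* reinterpret the float's bit pattern */
    /* %a prints the exact binary value; the raw bits show which of two
       adjacent single-precision floats each compiler actually produced */
    printf("%.12f  %a  0x%08x\n", r, r, (unsigned) bits);
    return 0;
}

If the two builds print different bit patterns, the two compilers really are producing different floats, not just rounding the same float differently.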
Recent gcc versions are able to use MPFR to do compile-time floating point computation. My guess is that your recent gcc does that and uses a higher precision for the compile-time version. This is allowed by at least the C99 standard (I haven't checked whether other revisions changed it):
6.3.1.8/2 in C99
The values of floating operands and of the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby.
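As an illustration of that clause (my own addition, assuming a C99 implementation), you can ask the implementation how much excess precision it advertises for floating expressions via FLT_EVAL_METHOD from <float.h>:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* FLT_EVAL_METHOD (C99): 0 = evaluate to each operand's type,
       1 = evaluate float and double as double,
       2 = evaluate everything as long double,
       -1 = indeterminable */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    return 0;
}

On x86-64 with SSE math this typically prints 0; with x87 math it typically prints 2.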
Edit: your gcc -S results confirm that. I haven't checked the computations, but the old one has (after substituting the constants' values for their memory references):
movss 1053092943, %xmm1
movss 1055100473, %xmm0
call powf
calling powf with the precomputed values for 4/9.0 and 1/2.6 and then printing the result after promotion to double, while the new one just prints the float 0x3f3b681f promoted to double.
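If you want to check the compile-time-folding theory yourself, a minimal sketch (my own, not from the answer) is to force one call through the runtime libm with volatile operands and compare it against the call with constant operands:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* volatile operands keep the compiler from folding this powf call,
       so it has to go through the runtime libm */
    volatile float base = 4.0 / 9.0;
    volatile float expo = 1.0 / 2.6;
    printf("runtime libm:  %.12f\n", powf(base, expo));

    /* constant operands can be folded at compile time (via MPFR in
       newer gcc), which may yield a differently rounded float */
    printf("constant fold: %.12f\n", powf(4.0/9.0, 1.0/2.6));
    return 0;
}

If the two lines differ, the discrepancy between gcc versions comes from where the computation happens, not from a change in rounding mode.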