I need some clarification about floating-point math.
I wrote some code for learning purposes:
#include <stdio.h>

int main(int argc, char const *argv[])
{
    int i;
    double a = 1.0 / 10.0;
    double sum = 0;

    for (i = 0; i < 10; ++i)
        sum += a;

    printf("%.17G\n", 10 * a);
    printf("%d\n", (10 * a == 1.0));
    printf("%.17G\n", sum);
    printf("%d\n", (sum == 1.0));
    return 0;
}
and the output it gives is:
1
1
0.99999999999999989
0
Why (sum == 1.0) is false is pretty understandable, but why does the multiplication give the right answer without any error?
Thanks.
If you look at the assembly language the compiler actually produces, you'll find that it isn't generating the single multiplication you wrote. Since 10 * a is a compile-time constant expression, the compiler simply folds it and emits the value directly. If you turn off optimization you might get the result you were expecting (unless your compiler folds the constant anyway).