Sorry if this is a dumb question, but I could not find an answer.
#include <iostream>
using namespace std;
int main()
{
    double a(0);
    double b(0.001);
    cout << a - 0.0 << endl;
    for (; a < 1.0; a += b);   // empty body: just keep incrementing a
    cout << a - 1.0 << endl;
    for (; a < 10.0; a += b);  // empty body again
    cout << a - 10.0 << endl;
    cout << a - 10.0 - b << endl;
    return 0;
}
Output:
0
6.66134e-16
0.001
-1.03583e-13
I tried compiling it with MSVC9, MSVC10, and Borland C++ 2010. All of them end up with an error of about 1e-13. Is it normal to have such significant error accumulation over only 1,000 to 10,000 increments?
Yes, this is normal floating-point representation error. It stems from the fact that the hardware must approximate most decimal values rather than storing them exactly, so the compiler you use should not matter.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
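For instance, printing the constant from the question with more digits than the default six shows that the stored value is not exactly 0.001 (a small illustration of my own, not part of the original code):

#include <iostream>
#include <iomanip>

int main()
{
    // 0.001 has no finite binary representation, so the stored double
    // is only the closest value the hardware can hold.
    double b = 0.001;
    std::cout << std::setprecision(20) << b << std::endl;
    // prints something like 0.0010000000000000000208 rather than exactly 0.001
    return 0;
}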
This is why, when working with floating point numbers, you should never do:
if( foo == 0.0 ){
//code here
}
and instead do
bool checkFloat(float _input, float _compare, float _epsilon){
    // true when _input is within _epsilon of _compare
    return ( _input + _epsilon > _compare ) && ( _input - _epsilon < _compare );
}
Think about it: every operation introduces a slight error, and the next operation works from that slightly faulty result. Given enough iterations, you will drift away from the true result. If you like, write your expressions in the form t0 = (t + y + e), t1 = (t0 + y + e), and so on, and collect the terms containing e; from those terms you can estimate the approximate error.
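As a rough illustration (my own sketch, not part of the original answer): summing 0.001 many times lets each step's rounding error feed into the next, whereas a single multiplication rounds only once.

#include <iostream>

int main()
{
    // Each of the 10000 additions rounds its result, and every later
    // addition starts from that already-rounded value.
    double sum = 0.0;
    for (int i = 0; i < 10000; ++i)
        sum += 0.001;

    // One multiplication introduces only a single rounding step.
    double direct = 10000 * 0.001;

    std::cout << sum - 10.0    << std::endl; // accumulated error, on the order of 1e-13
    std::cout << direct - 10.0 << std::endl; // typically 0
    return 0;
}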
There is also a second source of error: towards the end you are adding a relatively small number (0.001) to a relatively large one (close to 10). Recall the definition of machine precision: below some epsilon, 1 + e == 1 in floating point, so at some point the additions start losing significant bits.
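To make that machine-precision point concrete (a small illustration of my own; the particular values are chosen for demonstration):

#include <iostream>
#include <limits>

int main()
{
    // Machine epsilon: the gap between 1.0 and the next representable double.
    std::cout << std::numeric_limits<double>::epsilon() << std::endl; // ~2.22e-16

    // Anything much smaller than that gap simply vanishes when added to 1.0.
    std::cout << (1.0 + 1e-17 == 1.0) << std::endl; // 1

    // Near 10.0 the gap between adjacent doubles is larger still, so each
    // 0.001 added there keeps fewer of its own significant bits.
    std::cout << (10.0 + 1e-16 == 10.0) << std::endl; // 1
    return 0;
}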
Hopefully this helps clarify things in layman's terms.