To make the problem short, let's say I want to compute the expression a / (b - c) on floats.
To make sure the result is meaningful, I can check that b and c are not equal:
float EPS = std::numeric_limits<float>::epsilon();
if ((b - c) > EPS || (c - b) > EPS)
{
return a / (b - c);
}
but my tests show it is not enough to guarantee meaningful results, nor to avoid refusing to produce a result when one is possible. Two examples:
a = 1.0f;
b = 0.00000003f;
c = 0.00000002f;
Result: The if condition is NOT met, but the expression would produce a correct result of 100000008 (within the precision of floats).
a = 1e33f;
b = 0.000003f;
c = 0.000002f;
Result: The if condition is met, but the expression produces the meaningless result +1.#INF00 (infinity).
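For reference, here is a minimal, self-contained reproduction of both cases (the exact printed digits may vary by compiler and platform):

#include <cstdio>
#include <limits>

// Returns true when the epsilon test on the inputs considers b and c unequal.
static bool inputs_pass(float b, float c)
{
    float EPS = std::numeric_limits<float>::epsilon();
    return (b - c) > EPS || (c - b) > EPS;
}

int main()
{
    // Case 1: the test rejects the inputs, yet a / (b - c) is representable.
    float a1 = 1.0f, b1 = 0.00000003f, c1 = 0.00000002f;
    std::printf("case 1: test=%d result=%g\n", inputs_pass(b1, c1), a1 / (b1 - c1));

    // Case 2: the test accepts the inputs, yet the division overflows to +inf.
    float a2 = 1e33f, b2 = 0.000003f, c2 = 0.000002f;
    std::printf("case 2: test=%d result=%g\n", inputs_pass(b2, c2), a2 / (b2 - c2));
}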
I found it much more reliable to check the result, not the arguments:
const float INF = std::numeric_limits<float>::infinity();
float x = a / (b - c);
if (-INF < x && x < INF)
{
return x;
}
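Equivalently, std::isfinite from <cmath> expresses the same check in one call and, like the range test above, also rejects NaN. A sketch (checked_div is just an illustrative name):

#include <cmath>
#include <optional>

// Performs the division and hands back the result only when it is a
// finite float, i.e. neither +/-infinity nor NaN.
std::optional<float> checked_div(float a, float b, float c)
{
    float x = a / (b - c);
    if (std::isfinite(x))
        return x;
    return std::nullopt;  // overflow, or divisor zero / too small
}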
But what is the epsilon for, then, and why does everyone say epsilon is good to use?
Machine epsilon (εm) is defined as the distance (gap) between 1 and the next largest floating-point number. In programming languages these values are typically available as predefined constants. For example, in C, these constants are FLT_EPSILON and DBL_EPSILON and are defined in the float.h header.
This means that floats have between 6 and 7 significant decimal digits of precision, regardless of exponent. So between 0 and 1 you have quite a few decimal places to work with; once you get into the hundreds or thousands, you have lost a few.
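You can verify both claims directly, assuming the usual IEEE-754 single-precision float:

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    // epsilon is the gap between 1.0f and the next representable float.
    float eps = std::numeric_limits<float>::epsilon();
    std::printf("epsilon                = %g\n", eps);           // ~1.19e-07
    std::printf("nextafter(1) - 1       = %g\n",
                std::nextafterf(1.0f, 2.0f) - 1.0f);             // same value

    // The gap between adjacent floats grows with magnitude: at 1000 it is
    // 2^-14, i.e. 512 times larger than at 1.
    std::printf("nextafter(1000) - 1000 = %g\n",
                std::nextafterf(1000.0f, 2000.0f) - 1000.0f);    // ~6.1e-05
}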
The smaller the epsilon, the greater the comparison accuracy. However, if we specify the tolerance too small, we get the same false result as with a simple == comparison. In general, a tolerance on the order of 1e-5 or 1e-6 is usually a good place to start.
Floating-point decimal values generally do not have an exact binary representation due to how the CPU represents floating point data. For this reason, you may experience a loss of precision, and some floating-point operations may produce unexpected results.
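A short illustration of that loss of exactness, again assuming IEEE-754 single precision:

#include <cstdio>

int main()
{
    // 0.1 has no exact binary representation; the stored float is the
    // nearest representable value.
    float f = 0.1f;
    std::printf("%.9f\n", f);                        // prints 0.100000001

    // Accumulating it ten times therefore does not give exactly 1.0f.
    float sum = 0.0f;
    for (int i = 0; i < 10; ++i)
        sum += f;
    std::printf("%d (%.9f)\n", sum == 1.0f, sum);    // prints 0 (1.000000119)
}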
Epsilon is used to determine whether two numbers subject to rounding error are close enough to be considered "equal". Note that it is better to test fabs(b/c - 1) < EPS than fabs(b - c) < EPS, and better still, thanks to the design of IEEE floats, to test abs(*(int*)&b - *(int*)&c) < EPSI (where EPSI is some small integer).
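For reference, the same integer trick written with memcpy, since the pointer cast above technically violates strict aliasing in modern C++ (a sketch; it assumes finite floats of the same sign, and nearly_equal_ulps is just an illustrative name):

#include <cstdint>
#include <cstdlib>
#include <cstring>

// Compares the IEEE-754 bit patterns as integers; adjacent floats have
// adjacent integer representations, so this counts representable values
// ("ULPs") between b and c. Only meaningful for finite, same-sign floats.
bool nearly_equal_ulps(float b, float c, int max_ulps)
{
    std::int32_t ib, ic;
    std::memcpy(&ib, &b, sizeof ib);
    std::memcpy(&ic, &c, sizeof ic);
    return std::llabs(std::int64_t(ib) - std::int64_t(ic)) <= max_ulps;
}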
Your problem is of a different nature, and probably warrants testing the result rather than the inputs.
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).
This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
If you have not read What Every Computer Scientist Should Know About Floating-Point Arithmetic, it's a good starting point. Beyond that, if you are interested in the precision of the result of the division in your example, you have to estimate how imprecise b - c was made by previous rounding errors, because if b - c is small, a small absolute error in it is a large relative error, and that relative error carries over to the result. If your concern is only that the division should not overflow, then your test (on the result) is right. There is no reason to test for a null divisor with floating-point numbers; you just test for overflow of the result, which captures both the case where the divisor is null and the case where the divisor is so small that the result is not representable with any precision.
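To make the cancellation point concrete, here is a small sketch where b and c are each accurate to float precision but their difference is not (the values shown are what IEEE-754 single precision produces):

#include <cstdio>

int main()
{
    // b and c are each within one rounding error (~1e-7 relative) of
    // the intended values 1.000001 and 1.0.
    float b = 1.000001f;   // actually stored as 1.00000095367431640625
    float c = 1.0f;        // exact

    // The true difference is 1e-6; the computed one is off by ~4.6%,
    // because the absolute error of b is now large relative to b - c.
    float diff = b - c;
    std::printf("b - c       = %.9g (exact: 1e-06)\n", diff);

    // That relative error carries straight into the division.
    float a = 1.0f;
    std::printf("a / (b - c) = %.9g (exact: 1e+06)\n", a / diff);
}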
Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.