The value of this constant is 4.94065645841247e-324, the smallest positive subnormal double (2^-1074). Two apparently equivalent floating-point numbers might not compare equal because of differences in their least significant digits.
To compare two float or double values, we have to take a precision (tolerance) into account in the comparison. For example, 3.1428 and 3.1415 are equal up to a tolerance of 0.01, but not up to a tolerance of 0.001.
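A tolerance-based comparison might look like the following minimal sketch (the function name approximatelyEqual and the tolerance values are just illustrations, not part of any standard API):

#include <cmath>
#include <iostream>

// Returns true if a and b differ by less than the given tolerance.
bool approximatelyEqual(double a, double b, double tolerance) {
    return std::fabs(a - b) < tolerance;
}

int main() {
    std::cout << approximatelyEqual(3.1428, 3.1415, 0.01)  << '\n'; // 1 (equal at 0.01)
    std::cout << approximatelyEqual(3.1428, 3.1415, 0.001) << '\n'; // 0 (not equal at 0.001)
}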
Machine Epsilon is a machine-dependent floating point value that provides an upper bound on relative error due to rounding in floating point arithmetic. Mathematically, for each floating point type, it is equivalent to the difference between 1.0 and the smallest representable value that is greater than 1.0.
Assuming a 64-bit IEEE double, there is a 52-bit mantissa and an 11-bit exponent. Let's break it down into bits:
1.0000 00000000 00000000 00000000 00000000 00000000 00000000 × 2^0 = 1
The smallest representable number greater than 1:
1.0000 00000000 00000000 00000000 00000000 00000000 00000001 × 2^0 = 1 + 2^-52
Therefore:
epsilon = (1 + 2^-52) - 1 = 2^-52
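A quick sanity check (a minimal sketch, using std::ldexp to compute 2^-52 exactly) confirms that this matches std::numeric_limits<double>::epsilon():

#include <cmath>
#include <iostream>
#include <limits>

int main() {
    // ldexp(1.0, -52) computes 1.0 * 2^-52 exactly.
    std::cout << (std::numeric_limits<double>::epsilon() == std::ldexp(1.0, -52)) << '\n'; // prints 1
    std::cout << std::numeric_limits<double>::epsilon() << '\n'; // about 2.22045e-16
}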
Are there any numbers between 0 and epsilon? Plenty... E.g. the minimal positive representable (normal) number is:
1.0000 00000000 00000000 00000000 00000000 00000000 00000000 × 2^-1022 = 2^-1022
In fact there are (1022 - 52 + 1) × 2^52 = 4372995238176751616 numbers between 0 and epsilon, which is about 47% of all the positive representable numbers...
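You can verify that values far below epsilon are representable by printing the standard limits (a minimal sketch):

#include <iostream>
#include <limits>

int main() {
    // All three values are positive and representable; the first two
    // are far smaller than epsilon = 2^-52.
    std::cout << std::numeric_limits<double>::min() << '\n';        // 2^-1022, smallest positive normal
    std::cout << std::numeric_limits<double>::denorm_min() << '\n'; // 2^-1074, smallest positive subnormal
    std::cout << std::numeric_limits<double>::epsilon() << '\n';    // 2^-52
}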
The test is certainly not the same as someValue == 0. The whole idea of floating-point numbers is that they store an exponent and a significand; they therefore represent a value with a certain number of binary significant figures of precision (53 in the case of an IEEE double). The representable values are much more densely packed near 0 than they are near 1.
To use a more familiar decimal system, suppose you store a decimal value "to 4 significant figures" with an exponent. Then the next representable value greater than 1 is 1.001 * 10^0, and epsilon is 1.000 * 10^-3. But 1.000 * 10^-4 is also representable, assuming that the exponent can store -4. You can take my word for it that an IEEE double can store exponents less than the exponent of epsilon.
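One way to see this density difference directly is with std::nextafter, which returns the next representable value in a given direction (a minimal sketch):

#include <cmath>
#include <iostream>

int main() {
    // Gap between 0 and its successor vs. gap between 1 and its successor:
    // the spacing near 0 is vastly smaller than the spacing near 1.
    std::cout << std::nextafter(0.0, 1.0) << '\n';       // about 4.94066e-324
    std::cout << std::nextafter(1.0, 2.0) - 1.0 << '\n'; // about 2.22045e-16 (epsilon)
}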
You can't tell from this code alone whether it makes sense to use epsilon specifically as the bound; you need to look at the context. It may be that epsilon is a reasonable estimate of the error in the calculation that produced someValue, and it may be that it isn't.
There are numbers between 0 and epsilon because epsilon is the difference between 1 and the next representable number above 1, not the difference between 0 and the next representable number above 0 (if it were, that code would do very little):
#include <limits>

int main()
{
    struct Doubles
    {
        double one;
        double epsilon;
        double half_epsilon;
    } values;

    values.one = 1.0;
    values.epsilon = std::numeric_limits<double>::epsilon();
    values.half_epsilon = values.epsilon / 2.0;
}
Using a debugger, stop the program at the end of main and look at the results; you'll see that epsilon / 2 is distinct from epsilon, zero and one.
So this function takes values between +/- epsilon and makes them zero.
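The function itself is not shown here, but it presumably looks something like this minimal sketch (the name flushToZero is hypothetical):

#include <cmath>
#include <limits>

// Hypothetical reconstruction: values whose magnitude is below epsilon
// are replaced by zero; everything else passes through unchanged.
double flushToZero(double someValue) {
    if (std::fabs(someValue) < std::numeric_limits<double>::epsilon())
        return 0.0;
    return someValue;
}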
An approximation of epsilon (the smallest possible difference) around a number (1.0, 0.0, ...) can be printed with the following program. It prints this output:
epsilon for 0.0 is 4.940656e-324
epsilon for 1.0 is 2.220446e-16
A little thought makes it clear that epsilon gets smaller as the number whose epsilon-value we examine gets smaller, because the exponent can adjust to the magnitude of that number.
#include <stdio.h>
#include <assert.h>

double getEps(double m) {
    double approx = 1.0;
    double lastApprox = 0.0;
    /* Halve approx until adding it to m no longer changes m;
       the previous value is then the smallest difference visible at m. */
    while (m + approx != m) {
        lastApprox = approx;
        approx /= 2.0;
    }
    assert(lastApprox != 0);
    return lastApprox;
}

int main() {
    printf("epsilon for 0.0 is %e\n", getEps(0.0));
    printf("epsilon for 1.0 is %e\n", getEps(1.0));
    return 0;
}