The machine epsilon is canonically defined as the smallest number that, added to one, gives a result different from one.
There is a Double.Epsilon, but the name is very misleading: it is the smallest representable (denormalized) Double value, and thus useless for any kind of numeric programming.
I'd like to get the true epsilon for the Double
type, so that I don't have to hard-code tolerances into my program. How do I do this?
Machine epsilon ϵ is the distance between 1 and the next floating point number. Machine precision u is the accuracy of the basic arithmetic operations; this number is also known as the unit roundoff. When the precision is p and the radix is β, we have ϵ = β^(1−p).
Machine precision is the smallest number eps such that the difference between 1 and 1 + eps is nonzero, i.e., it is the smallest difference between two numbers that the computer recognizes. In single precision (32 bits) it is 2^−23 (approximately 10^−7), while in double precision (64 bits) it is 2^−52 (approximately 10^−16).
For the 64-bit representation of IEEE floating point numbers, the significand length is 52 bits (53 counting the implicit leading bit). Thus, machine epsilon is 2^−52 ≈ 2.22 × 10^−16, so decimal calculations with 64 bits give about 16 digits of precision. This is a conservative definition used by industry.
Machine epsilon or machine precision is an upper bound on the relative approximation error due to rounding in floating point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science.
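Note that the two definitions given above (the gap between 1 and the next double, versus the smallest eps for which 1 + eps differs from 1) differ by a factor of two under round-to-nearest. A small sketch illustrating both on an IEEE 754 machine (the class name is my own):

```csharp
using System;

class MachineEpsilonDemo
{
    static void Main()
    {
        double eps = Math.Pow(2, -52);  // machine epsilon: gap between 1 and the next double
        double u   = eps / 2;           // unit roundoff: 2^-53

        Console.WriteLine(1.0 + eps > 1.0);  // True: 1 + eps is exactly representable
        Console.WriteLine(1.0 + u == 1.0);   // True: 1 + u rounds back down to 1 (ties-to-even)
    }
}
```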
It is (on my machine):
1.11022302462516E-16
You can easily calculate it:
double machEps = 1.0d;
do
{
    machEps /= 2.0d;
}
while ((double)(1.0 + machEps) != 1.0);
Console.WriteLine("Calculated machine epsilon: " + machEps);
Edited:
I was calculating 2 × epsilon before; now it should be correct.
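For what it's worth, you can check where the loop stops against the closed-form powers of two. This sketch assumes the loop above unchanged, on an IEEE 754 machine with round-to-nearest:

```csharp
using System;

class LoopCheck
{
    static void Main()
    {
        double machEps = 1.0d;
        do
        {
            machEps /= 2.0d;
        }
        while ((double)(1.0 + machEps) != 1.0);

        // The loop exits with machEps = 2^-53 (the unit roundoff);
        // doubling it gives the gap between 1 and the next double, 2^-52.
        Console.WriteLine(machEps == Math.Pow(2, -53));      // True
        Console.WriteLine(2 * machEps == Math.Pow(2, -52));  // True
    }
}
```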
The Math.NET library defines a Precision class, which has a DoubleMachineEpsilon property.
You could check how they do it.
According to that it is:
/// <summary>
/// The base number for binary values
/// </summary>
private const int BinaryBaseNumber = 2;
/// <summary>
/// The number of binary digits used to represent the significand of a double-precision
/// floating point value, i.e. the digits of the actual number: in a number such as
/// 0.134556 * 10^5, the digits are 0.134556 and the exponent is 5.
/// </summary>
private const int DoublePrecision = 53;
private static readonly double doubleMachinePrecision = Math.Pow(BinaryBaseNumber, -DoublePrecision);
So it is 1.11022302462516E-16 according to this source.
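You don't need the library to reproduce that value; the snippet boils down to a single Math.Pow call. A standalone sketch of the same computation:

```csharp
using System;

class MathNetStyleEpsilon
{
    static void Main()
    {
        const int BinaryBaseNumber = 2;
        const int DoublePrecision = 53;
        double doubleMachinePrecision = Math.Pow(BinaryBaseNumber, -DoublePrecision);

        // 2^-53, i.e. half the gap between 1 and the next double
        Console.WriteLine(doubleMachinePrecision == Math.Pow(2, -53));  // True
    }
}
```

Note this is 2^−53 (the unit roundoff), half of the 2^−52 value quoted in the other answers.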
Just hard-code the value:
const double e1 = 2.2204460492503131e-16;
or use the power of two:
static readonly double e2 = Math.Pow(2, -52);
or use your definition (more or less):
static readonly double e3 = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(1.0) + 1L) - 1.0;
And see Wikipedia: machine epsilon.
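A quick sketch checking that the three approaches yield bit-identical values:

```csharp
using System;

class CompareEpsilons
{
    static void Main()
    {
        const double e1 = 2.2204460492503131e-16;
        double e2 = Math.Pow(2, -52);
        double e3 = BitConverter.Int64BitsToDouble(
                        BitConverter.DoubleToInt64Bits(1.0) + 1L) - 1.0;

        // All three are exactly the same double: the gap between 1 and the next double.
        Console.WriteLine(e1 == e2 && e2 == e3);  // True
    }
}
```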