I am trying to determine what the maximum precision for a double is. In the comments on the accepted answer to Retain precision with double in Java, @PeterLawrey states the max precision is 15.
How do you determine this?
@PeterLawrey states the max precision is 15.
That's actually not what he stated at all. What he stated was:
double has 15 decimal places of accuracy
and he is wrong. They have 15 decimal digits of accuracy.
The number of decimal digits in any number is given by its log to the base 10. 15 is the floor value of log10(2^53 - 1), where 53 is the number of bits of mantissa (including the implied bit), as described in the Javadoc and IEEE 754, and 2^53 - 1 is therefore the maximum possible mantissa value. The actual value is 15.954589770191003298111788092734, to the limits of the Windows calculator.
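You can reproduce that calculation in Java itself (a quick sketch; the class name is mine):

public class MantissaDigits {
    public static void main(String[] args) {
        // 53 bits of mantissa: 52 stored bits plus the implied leading 1 (IEEE 754 binary64)
        double maxMantissa = Math.pow(2, 53) - 1;   // 9007199254740991
        double digits = Math.log10(maxMantissa);    // ~15.954589770191003
        System.out.println("log10(2^53 - 1) = " + digits);
        System.out.println("floor           = " + (long) Math.floor(digits));   // 15
    }
}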
He is quite wrong to describe it as 'decimal places of accuracy'. A double
has 15 decimal digits of accuracy if they are all before the decimal point. For numbers with fractional parts you can get many more than 15 digits in the decimal representation, because of the incommensurability of decimal and binary fractions.
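For example, the double literal 0.1 actually stores the nearest binary fraction, whose exact decimal expansion runs to 55 digits; the BigDecimal(double) constructor reveals that exact value (a small sketch, class name is mine):

import java.math.BigDecimal;

public class ExactExpansion {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the double exactly, with no rounding,
        // so it shows the full decimal expansion of the stored binary fraction
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}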
Run this code and see where it stops:
public class FindPrecisionDouble {

    public static void main(String[] args) {
        double x = 1.0;
        double y = 0.5;
        double epsilon = 0;
        int nb_iter = 0;
        // Halve the gap between y and 1.0 until the two become indistinguishable
        while ((nb_iter < 1000) && (x != y)) {
            System.out.println(x - y);
            epsilon = Math.abs(x - y);
            y = (x + y) * 0.5;
            nb_iter++;   // without this increment the iteration guard never triggers
        }
        final double prec_decimal = -Math.log(epsilon) / Math.log(10.0);
        final double prec_binary = -Math.log(epsilon) / Math.log(2.0);
        System.out.print("On this machine, for the 'double' type, ");
        System.out.print("epsilon = ");
        System.out.println(epsilon);
        System.out.print("The decimal precision is ");
        System.out.print(prec_decimal);
        System.out.println(" digits");
        System.out.print("The binary precision is ");
        System.out.print(prec_binary);
        System.out.println(" bits");
    }
}
Variable y becomes the value closest to 1.0 that is still different from it. On my computer (Mac Intel Core i5), the loop stops when x - y reaches 1.1102...E-16. The program then prints the precision (in decimal and in binary).
As explained at https://en.wikipedia.org/wiki/Machine_epsilon, floating-point precision can be estimated with the epsilon value: "the smallest number that, when added to one, yields a result different from one". (I used a small variation, 1 - e instead of 1 + e, but the logic is the same.)
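As a cross-check (my addition, not part of the original measurement), Java exposes the spacing of doubles directly through Math.ulp; the loop above approaches 1.0 from below, so it measures half of ulp(1.0):

public class UlpCheck {
    public static void main(String[] args) {
        // Distance from 1.0 to the next larger double: 2^-52
        System.out.println(Math.ulp(1.0));       // 2.220446049250313E-16
        // The spacing just below 1.0 is half of that: 2^-53,
        // which matches the 1.1102...E-16 measured by the loop
        System.out.println(Math.ulp(1.0) / 2);   // 1.1102230246251565E-16
    }
}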
I'll explain in decimal: with four decimal digits of precision you can express 1.0000 - 0.0001, but you cannot express 1.00000 - 0.00001 (you lack the fifth decimal). In that example the epsilon is 0.0001, so the epsilon directly measures the floating-point precision. Just transpose the same idea to binary.
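Transposed to binary, the same effect can be seen directly (a minimal sketch; 1e-16 and 3e-16 are simply values I picked below and above the rounding threshold near 1.0):

public class EpsilonDemo {
    public static void main(String[] args) {
        // 1e-16 is less than half the spacing of doubles above 1.0, so the sum rounds back to 1.0
        System.out.println(1.0 + 1e-16 == 1.0);   // true
        // 3e-16 is larger than the spacing above 1.0, so the sum is a distinct double
        System.out.println(1.0 + 3e-16 == 1.0);   // false
    }
}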
Edit: Your question asked "How to determine...". The answer you were looking for (and the one you accepted) is more an explanation than a way to determine the precision. In any case, for other people, running this code on a machine will determine the precision of the "double" type.