 

Double precision - decimal places

From what I have read, a value of data type double has an approximate precision of 15 decimal places. However, when I use a number whose decimal representation repeats, such as 1.0/7.0, I find that the variable holds the value of 0.14285714285714285 - which is 17 places (via the debugger).

I would like to know why the value is stored with 17 significant decimal digits internally, and why the precision is usually quoted as only ~15 decimal places.
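A minimal sketch (not the original debugger session) that reproduces this with printf is to print the same value at 15 and at 17 significant digits:

#include <cstdio>

int main() {
    double x = 1.0/7.0;
    // At 15 significant digits the output still looks like the repeating pattern of 1/7,
    // but asking for 17 digits exposes the underlying binary approximation.
    printf("%.15g\n", x);   // 0.142857142857143
    printf("%.17g\n", x);   // 0.14285714285714285
}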

nf313743 asked Apr 03 '12


1 Answer

An IEEE double has 53 significant bits (that's the value of DBL_MANT_DIG in <cfloat>). That's approximately 15.95 decimal digits (log10(2^53)); the implementation has to round down, so it sets DBL_DIG to 15, not 16. Because of that rounding, you get nearly an extra decimal digit of precision beyond what DBL_DIG == 15 implies.
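Here is a quick sketch of that arithmetic (a separate snippet from the program further down):

#include <cstdio>
#include <cfloat>
#include <cmath>

int main() {
    // 53 significant bits correspond to log10(2^53) ~= 15.95 decimal digits,
    // which is why DBL_DIG, the guaranteed decimal precision, is 15.
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);                  // 53
    printf("log10(2^53)  = %.2f\n", DBL_MANT_DIG * log10(2.0));   // ~15.95
    printf("DBL_DIG      = %d\n", DBL_DIG);                       // 15
}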

The nextafter() function computes the next representable number after a given number, in the direction of its second argument; it can be used to show just how precise a given number is.

This program:

#include <cstdio>
#include <cfloat>
#include <cmath>

int main() {
    double x = 1.0/7.0;
    printf("FLT_RADIX = %d\n", FLT_RADIX);
    printf("DBL_DIG = %d\n", DBL_DIG);
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);
    printf("%.17g\n%.17g\n%.17g\n", nextafter(x, 0.0), x, nextafter(x, 1.0));
}

gives me this output on my system:

FLT_RADIX = 2
DBL_DIG = 15
DBL_MANT_DIG = 53
0.14285714285714282
0.14285714285714285
0.14285714285714288

(You can replace %.17g by, say, %.64g to see more digits, none of which are significant.)

As you can see, the last displayed decimal digit changes by 3 with each consecutive value. The fact that the last displayed digit of 1.0/7.0 (5) happens to match the mathematical value is largely coincidental; it was a lucky guess. And the correct rounded digit is 6, not 5. Replacing 1.0/7.0 by 1.0/3.0 gives this output:

FLT_RADIX = 2
DBL_DIG = 15
DBL_MANT_DIG = 53
0.33333333333333326
0.33333333333333331
0.33333333333333337

which shows about 16 decimal digits of precision, as you'd expect.
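As a further illustrative sketch (not part of the original program), you can print the spacing between consecutive doubles near 1/7 directly. Since 1/7 lies between 2^-3 and 2^-2, that spacing is 2^-55, roughly 2.8e-17, which is why the 17th significant digit of 0.142857... steps by about 3 from one representable value to the next:

#include <cstdio>
#include <cmath>

int main() {
    double x = 1.0/7.0;
    // Gap (ULP) between x and the next representable double above it.
    // 1/7 lies in [2^-3, 2^-2), so the gap is 2^-55 ~= 2.78e-17,
    // i.e. about 3 units in the 17th significant digit of 0.142857...
    printf("gap near 1/7 = %g\n", nextafter(x, 1.0) - x);   // ~2.77556e-17
}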

Keith Thompson answered Sep 17 '22