
What is the difference between DECIMAL_DIG and LDBL_DIG in <float.h>

The macro constant DECIMAL_DIG is the

number of decimal digits that can be converted to long double and back without losing precision.

The macro constant LDBL_DIG is the

number of decimal digits that can be represented without losing precision for long double.

What is the difference between these two definitions? Is there a case where using one over the other could lead to incorrect results?

On my machine, DECIMAL_DIG == 21, while LDBL_DIG == 18.


Andrew McKinlay asked Sep 26 '16

1 Answer

[Edit Oct 2021]
Future versions of C (C23) may "make DECIMAL_DIG obsolescent".
I recommend you consider alternatives.


What is the difference between DECIMAL_DIG and LDBL_DIG?

DECIMAL_DIG concerns widest floating-point type --> decimal text --> widest floating-point type conversions.
LDBL_DIG concerns decimal text --> long double --> decimal text conversions.


First: Narrow the problem

DECIMAL_DIG (available since C99) applies to the widest floating-point type. With C11, three type-specific macros FLT_DECIMAL_DIG, DBL_DECIMAL_DIG, and LDBL_DECIMAL_DIG mean the same thing, except each applies to its corresponding type rather than the widest one.

To simplify the problem, let us compare LDBL_DECIMAL_DIG to LDBL_DIG as they both deal with the same type: long double.


decimal text representation --> long double --> decimal text representation.
LDBL_DIG is the maximum significant digits of text that in this round-trip always result in the same starting value.

long double --> decimal text representation --> long double.
LDBL_DECIMAL_DIG is the number of significant digits of text needed in this round-trip to always result in the same starting long double value.

If the floating-point type used a base-10 representation, LDBL_DIG and LDBL_DECIMAL_DIG would have the same value. Yet most C implementations use a binary base, 2 rather than 10: FLT_RADIX == 2.


What follows avoids a deep mathematical explanation.

long double cannot represent every value that a decimal text representation can. The latter can be s = "0.1234567890123456789012345678901234567890", which a common long double cannot represent exactly. Converting s to long double and back to text is not expected to return the same result.

const char *s = "0.1234567890123456789012345678901234567890";
long double ld = strtold(s, NULL);
printf("%.40Le\n", ld);
// typical output        v -- different
// 1.2345678901234567890132180073559098332225e-01

If we limit text input to LDBL_DIG significant digits, though, the code will always succeed for all such values - the round trip reproduces the starting text.

s = "0.123456789012345678";
ld = strtold(s, NULL);
printf("%d\n%.*Le\n", LDBL_DIG, LDBL_DIG - 1, ld);
// 18
// 1.23456789012345678e-01

The post Printf width specifier to maintain precision of floating-point value details the use of the xxx_DECIMAL_DIG family of macros. It shows the number of significant digits needed to print a floating-point value to text and then convert it back to an FP value and always get the same result.


Note: xxx_DECIMAL_DIG >= xxx_DIG.

LDBL_DIG - 1 is used above rather than LDBL_DIG because %.*Le prints a leading digit and then the specified precision number of digits after the decimal point. The total significant digit count is then LDBL_DIG.



Further info to answer "Are the definitions I quoted wrong or not?"

The first definition is close, yet not complete.
LDBL_DIG refers to text --> long double --> text needs.

LDBL_DIG

OP's: "number of decimal digits that can be represented without losing precision for long double."

C Spec: "number of decimal digits, q, such that any floating-point number with q decimal digits can be rounded into a floating-point number with p radix b digits and back again without change to the q decimal digits,"

q = floor((p - 1) * log10(b))

On OP's machine, long double has p == 64 and b == 2 --> q == 18.

Thus a decimal number with up to 18 significant digits, as text, can be converted to a long double and back to an 18-digit text number and always yield the starting text value - within the normal long double range.


DECIMAL_DIG

The second definition is amiss.
DECIMAL_DIG refers to long double --> text --> long double needs.
OP's definition speaks of text to long double to text.

OP's: "number of decimal digits that can be converted to long double and back without losing precision."

C Spec: "number of decimal digits, n, such that any floating-point number in the widest supported floating type with pmax radix b digits can be rounded to a floating-point number with n decimal digits and back again without change to the value,"

n = ceil(1 + pmax * log10(b))

On OP's machine, long double has pmax == 64 and b == 2 --> n == 21.

Thus a long double needs to be converted to a decimal number with at least 21 significant digits, as text, to convert back to the same long double - within the normal long double range.

chux - Reinstate Monica answered Oct 13 '22