
sizeof long double and precision not matching?

Consider the following C code:

#include <stdio.h>
int main(int argc, char* argv[]) 
{
    const long double ld = 0.12345678901234567890123456789012345L;
    printf("%lu %.36Lf\n", sizeof(ld), ld);
    return 0;
}

Compiled with gcc 4.8.1 under 64-bit Ubuntu 13.04, it prints:

16 0.123456789012345678901321800735590983

This tells me that a long double occupies 16 bytes, but the decimals seem to be correct only to about the 20th place. How is that possible? 16 bytes corresponds to quadruple precision, and quadruple precision should give me between 33 and 36 significant decimal digits.

Asked Jun 29 '13 by Vincent

People also ask

Is long double more precise than double?

The double and long double are two floating-point data types used in programming languages such as C and C++. The main difference between them is that double represents a double precision (usually 64-bit) floating-point value, while long double represents an extended precision floating-point value.
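
To make the difference visible, here is a small sketch (not from the original page) printing the same constant at both precisions; on x86 the double typically goes wrong around the 16th digit and the long double around the 20th:

#include <stdio.h>

int main(void)
{
    double d       = 0.12345678901234567890123456789012345;
    long double ld = 0.12345678901234567890123456789012345L;
    printf("double:      %.36f\n", d);
    printf("long double: %.36Lf\n", ld);
    return 0;
}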

How much precision does a long double have?

With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC).
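
The "double-double" technique mentioned above can be sketched in a few lines: a value is held as an unevaluated sum of two doubles, giving roughly 2 x 53 = 106 significand bits. The two_sum below is Knuth's error-free transformation; this is an illustration (it assumes IEEE arithmetic without -ffast-math), not a production library:

#include <stdio.h>

typedef struct { double hi, lo; } dd;

/* Error-free: hi + lo equals a + b exactly; lo captures the rounding error. */
static dd two_sum(double a, double b)
{
    double hi = a + b;
    double bb = hi - a;
    double lo = (a - (hi - bb)) + (b - bb);
    return (dd){ hi, lo };
}

int main(void)
{
    dd s = two_sum(1.0, 1e-30);               /* 1e-30 is lost in plain double addition */
    printf("hi = %g, lo = %g\n", s.hi, s.lo); /* hi = 1, lo = 1e-30 */
    return 0;
}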

What is the size of float value with double precision?

The XDR standard defines the encoding for the double-precision floating-point data type as a double. The length of a double is 64 bits or 8 bytes.
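
Those 64 bits break down as 1 sign bit, 11 exponent bits, and 52 fraction bits. Here is a sketch of mine that decodes them, assuming the platform uses IEEE 754 binary64 for double, as virtually all current ones do:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double d = -0.15625;            /* -1.01 (binary) * 2^-3, exactly representable */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits); /* portable way to view the bit pattern */

    unsigned sign     = (unsigned)(bits >> 63);
    unsigned exponent = (unsigned)(bits >> 52) & 0x7FF; /* biased by 1023 */
    uint64_t fraction = bits & ((1ULL << 52) - 1);

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%013llx\n",
           sign, exponent, (int)exponent - 1023,
           (unsigned long long)fraction);
    return 0;
}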

What has more precision than double?

.NET and Java also have Decimal/BigDecimal classes that offer higher precision than double. For more accurate calculations, as in financial and banking applications, Decimal is used because it further reduces rounding errors.
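
The classic demonstration of why: 0.1 has no exact binary representation, so even a double accumulates error that a decimal type avoids. A minimal sketch:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;                 /* each 0.1 is already rounded in binary */

    printf("%.17g\n", sum);         /* 0.99999999999999989, not 1 */
    printf("%s\n", sum == 1.0 ? "equal" : "not equal");
    return 0;
}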


1 Answer

The long double format in your C implementation uses the Intel 80-bit extended format: a one-bit sign, a 15-bit exponent, and a 64-bit significand (ten bytes total). The compiler allocates 16 bytes for it, which is wasteful but useful for some things such as alignment. However, the 64 bits provide only log10(2^64) ≈ 19.27 decimal digits of significance, which is about 20 digits.
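
As a concrete illustration (my sketch, assuming the x86 little-endian layout described above, not part of the original answer), the 16 allocated bytes can be dumped directly; only the first ten are the value, and the remaining six are padding with unspecified contents:

#include <stdio.h>
#include <string.h>

int main(void)
{
    long double ld = 0.12345678901234567890123456789012345L;
    unsigned char bytes[sizeof ld];
    memcpy(bytes, &ld, sizeof ld);

    /* bytes 0-9: the 80-bit value; bytes 10-15: padding */
    for (size_t i = 0; i < sizeof ld; i++)
        printf("%02x%s", bytes[i], i == 9 ? " | " : " ");
    printf("\n");
    return 0;
}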

Answered Oct 14 '22 by Eric Postpischil