Why does double in C print fewer decimal digits than C++?

I have this code in C where I've declared 0.1 as double.

#include <stdio.h>

int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints:

a is 0.10000000000000001000000000000000000000000000000000000000

Same code in C++:

#include <iostream>
using namespace std;

int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints:

a is 0.1000000000000000055511151231257827021181583404541015625

What is the difference? I have read that both are allotted 8 bytes, so how does C++ print more digits in the decimal places?

Also, how can it go to 55 decimal places? IEEE 754 floating point has only 52 bits for the fraction, which gives about 15 decimal digits of precision. It is stored in binary, so how can its decimal interpretation hold more?

asked Oct 05 '18 by Raghavendra Gujar



2 Answers

With MinGW g++ (and gcc) 7.3.0 your results are reproduced exactly.

This is a pretty weird case of Undefined Behavior.

The Undefined Behavior is due to using printf without including an appropriate header, violating the “shall” in¹

C++17 §20.5.2.2

A translation unit shall include a header only outside of any declaration or definition, and shall include the header lexically before the first reference in that translation unit to any of the entities declared in that header. No diagnostic is required.

In the C++ code, change <iostream> to <stdio.h> to get valid C++ code, and you get the same result as with the C program.


Why does the C++ code even compile?

Well, unlike C, in C++ a standard library header is allowed to drag in any other header. And evidently with g++ the <iostream> header drags in some declaration of printf. Just not an entirely correct one.

Details: With MinGW g++ 7.3.0 the declaration/definition of printf depends on the macro symbol __USE_MINGW_ANSI_STDIO. The default is just that <stdio.h> declares printf. But when __USE_MINGW_ANSI_STDIO is defined as logical true, <stdio.h> provides an overriding definition of printf, that calls __mingw_vprintf. And as it happens the <cstdio> header defines (via an indirect include) __USE_MINGW_ANSI_STDIO before including <stdio.h>.

There is a comment in <_mingw.h>, "Note that we enable it also for _GNU_SOURCE in C++, but not for C case.".

In C++, with relevant versions of this compiler, there is effectively a difference between including <stdio.h> and using printf, or including <cstdio>, saying using std::printf;, and using printf.


Regarding

Also, how can it go to 55 decimal places? IEEE 754 floating point has only 52 bits for the fraction, which gives about 15 decimal digits of precision. It is stored in binary, so how can its decimal interpretation hold more?

... it's just the decimal presentation that's longer. The digits beyond the precision of the internal representation, about 15 significant digits for a 64-bit IEEE 754 double, are essentially garbage, but they can be used to reconstitute the original bits exactly. At some point the remaining digits become all zeroes, and that point is reached at the last digit in your C++ program's output.


¹ Thanks to Dietrich Epp for finding that standards quote.

answered Sep 29 '22 by Cheers and hth. - Alf


It looks to me like both cases print 56 decimal digits, so the question is technically based on a flawed premise.

I also see that both numbers are equal to 0.1 within 52 bits of precision, so both are correct.

That leads to your final question, "How come its decimal interpretation stores more?". It doesn't store more decimals. A double doesn't store any decimals at all. It stores bits. The decimals are generated.

answered Sep 29 '22 by MSalters