Are C doubles different to .NET doubles?

Comparing some C code and the F# I'm trying to replace it with, I observed some differences in the final result.

Working back up the code, I discovered that there were differences right from the start - albeit tiny ones.

The code starts by reading in data from a file, and the very first number comes out differently. For instance, in F# (easier to script):

let a = 71.9497985840
printfn "%.20f" a

I get the expected (to me) output 71.94979858400000000000.

But in C:

double a = 71.9497985840;
fprintf (stderr, "%.20f\n", a);

prints out 71.94979858400000700000.

Where does that 7 come from?

The difference is only tiny, but it bothers me because I don't know why. (It also bothers me because it makes it more difficult to track down where my two versions of the code are diverging.)

asked Jun 05 '12 by Benjol

People also ask

What is the difference between double and decimal in C#?

Double (aka double): A 64-bit floating-point number. Decimal (aka decimal): A 128-bit floating-point number with a higher precision and a smaller range than Single or Double.

What is the difference between float and double in C#?

Use float or double? The precision of a floating-point value indicates how many significant digits it can hold. A float has only six or seven significant decimal digits, while a double has about 15. Therefore it is safer to use double for most calculations.
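As a rough illustration (a minimal C sketch of my own, not from the original question), storing the same constant in a float and a double shows where each one runs out of digits:

#include <stdio.h>

int main(void)
{
    /* Same constant stored with ~7 significant digits (float)
       and ~15-16 significant digits (double). */
    float  f = 71.9497985840f;
    double d = 71.9497985840;

    printf("float : %.20f\n", f);
    printf("double: %.20f\n", d);
    return 0;
}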

What is double in C#?

A double is a 64-bit floating-point data type; C, C++, C#, and many other programming languages recognize double as a type. A double can represent fractional as well as whole values, and holds up to about 15 significant digits in total, including those before and after the decimal point.

How accurate is double in C?

A double has 64 bits of storage (1 bit for the sign, 11 bits for the exponent, and 52 bits for the significand), which gives about 15 decimal digits of precision.
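If you want to see those fields directly, here is a small sketch of my own (assuming an IEEE 754 binary64 double) that extracts the sign, exponent, and significand bits:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double d = 71.9497985840;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                /* reinterpret the 64 bits  */

    uint64_t sign     = bits >> 63;                /* 1 bit                    */
    uint64_t exponent = (bits >> 52) & 0x7FF;      /* 11 bits, biased by 1023  */
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL; /* 52 bits                  */

    printf("sign=%llu exponent=%llu mantissa=0x%013llx\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (unsigned long long)mantissa);
    return 0;
}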


2 Answers

It's a difference in printing. Converting that value to an IEEE 754 double yields

Prelude Text.FShow.RealFloat> FD 71.9497985840
71.94979858400000694018672220408916473388671875

but the representation 71.949798584 is sufficient to distinguish the number from its neighbours. When C is asked to print with a precision of 20 digits after the decimal point, it converts the value, correctly rounded, to that many digits; F# apparently uses the shortest uniquely determining representation and then pads it with zeros to the requested length, just like Haskell does.
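You can reproduce that exact decimal expansion from C as well; a minimal sketch (mine, not part of the original answer) that asks printf for enough digits, plus the hexadecimal form:

#include <stdio.h>

int main(void)
{
    double a = 71.9497985840;
    /* With a correctly rounding printf (e.g. glibc), 44 digits after the
       point shows the exact binary64 value quoted above. */
    printf("%.44f\n", a);
    /* %a prints the exact value in hexadecimal floating-point notation. */
    printf("%a\n", a);
    return 0;
}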

answered Oct 29 '22 by Daniel Fischer

It's just different rounding. The numbers are the same (according to CPython, at least):

>>> '%.44f' % 71.94979858400000000000
'71.94979858400000694018672220408916473388671875'
>>> '%.44f' % 71.94979858400000700000
'71.94979858400000694018672220408916473388671875'
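The same check can be done from C by comparing the bit patterns of the two literals (a small sketch of my own, assuming IEEE 754 binary64 doubles):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double x = 71.94979858400000000000;   /* the F# output */
    double y = 71.94979858400000700000;   /* the C output  */
    uint64_t bx, by;
    memcpy(&bx, &x, sizeof bx);
    memcpy(&by, &y, sizeof by);
    /* Both literals round to the same binary64 value, so the bits match. */
    printf("%016llx\n%016llx\nequal: %d\n",
           (unsigned long long)bx, (unsigned long long)by, (int)(bx == by));
    return 0;
}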
answered Oct 29 '22 by dan04