
printing float, preserving precision

I am writing a program that prints floating point literals to be used inside another program.

How many digits do I need to print in order to preserve the precision of the original float?

Since a float has 24 * (log(2) / log(10)) = 7.2247199 decimal digits of precision, my initial thought was that printing 8 digits should be enough. But if I'm unlucky, those 0.2247199 get distributed to the left and to the right of the 7 significant digits, so I should probably print 9 decimal digits.

Is my analysis correct? Is 9 decimal digits enough for all cases? Like printf("%.9g", x);?

Is there a standard function that converts a float to a string with the minimum number of decimal digits required for that value, in the cases where 7 or 8 are enough, so I don't print unnecessary digits?

Note: I cannot use hexadecimal floating point literals, because standard C++ does not support them.

asked Jun 05 '12 by fredoverflow


2 Answers

In order to guarantee that a binary->decimal->binary round trip recovers the original binary value, IEEE 754 requires:

    The original binary value will be preserved by converting to decimal and back again using:

    5 decimal digits for binary16
    9 decimal digits for binary32
    17 decimal digits for binary64
    36 decimal digits for binary128

For other binary formats the required number of decimal digits is

    1 + ceiling(p*log10(2)) 

where p is the number of significant bits in the binary format, e.g. 24 bits for binary32.

In C, the functions you can use for these conversions are snprintf() and strtof/strtod/strtold().

Of course, in some cases even more digits can be useful (they are not necessarily "noise"; it depends on the implementation of the decimal conversion routines such as snprintf()). Consider, e.g., printing dyadic fractions exactly.

answered Oct 21 '22 by janneb


24 * (log(2) / log(10)) = 7.2247199

That's pretty representative of the problem. It makes no sense whatsoever to express the number of significant digits with an accuracy of 0.0000001 digits. You are converting numbers to text for the benefit of a human, not a machine. A human couldn't care less and would much prefer that you wrote

24 * (log(2) / log(10)) = 7

Trying to display 8 significant digits just generates random noise digits. There are non-zero odds that even 7 is already too many, because floating-point error accumulates in calculations. Above all, print numbers using a reasonable unit of measure. People are interested in millimeters, grams, pounds, inches, etcetera. No architect will care about the size of a window expressed more accurately than 1 mm. No window manufacturing plant will promise a window sized as accurately as that.

Last but not least, you cannot ignore the accuracy of the numbers you feed into your program. Measuring the speed of an unladen European swallow down to 7 digits is not possible. It is roughly 11 meters per second, 2 digits at best. So performing calculations on that speed and printing a result that has more significant digits produces nonsensical results that promise accuracy that isn't there.

answered Oct 21 '22 by Hans Passant