
How do I check the default decimal precision when converting float to str?


When converting a float to a str, I can specify the number of decimal points I want to display

'%.6f' % 0.1
> '0.100000'
'%.6f' % .12345678901234567890
> '0.123457'

But when simply calling str on a float in Python 2.7, it seems to default to a maximum of 12 digits

str(0.1)
>'0.1'
str(.12345678901234567890)
>'0.123456789012'

Where is this maximum number of digits defined/documented? Can I get this number programmatically?

asked Aug 07 '18 by C_Z_

2 Answers

The number of decimals displayed is going to vary, and there is no way to predict exactly how many will be shown in pure Python. Some libraries, such as numpy, allow you to set the precision of their output.
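For instance, with numpy (a quick sketch; numpy.set_printoptions only changes how numpy prints its own arrays, it does not affect Python's built-in str()):

>>> import numpy as np
>>> np.set_printoptions(precision=3)
>>> print(np.array([0.12345678901234567890]))
[0.123]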

The variation itself is simply a consequence of the limitations of float representation.

The relevant parts of the Python tutorial's section on floating-point arithmetic (https://docs.python.org/3/tutorial/floatingpoint.html) talk about how Python chooses to display floats.

Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine

Python keeps the number of digits manageable by displaying a rounded value instead
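You can see the exact value that is actually stored by handing the float to the decimal module (a quick sketch; Decimal(0.1) converts the stored binary value exactly, without any rounding):

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> str(0.1)
'0.1'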

Now, there is the possibility of overlap here:

Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction

The method for choosing which decimal values to display was changed in Python 3.1 (although the phrase "on most systems" in the quote below suggests this might be an implementation detail).

For example, the numbers 0.1 and 0.10000000000000001 are both approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x

Historically, the Python prompt and built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is now able to choose the shortest of these and simply display 0.1.
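Both points are easy to check interactively, since float.as_integer_ratio() exposes the exact fraction that is stored and repr() shows the shortest round-tripping string:

>>> 0.1 == 0.10000000000000001
True
>>> (0.1).as_integer_ratio()        # the shared approximation, 3602879701896397 / 2 ** 55
(3602879701896397, 36028797018963968)
>>> repr(0.1)                       # Python 3.1+ picks the shortest candidate
'0.1'
>>> eval(repr(0.1)) == 0.1          # the round-trip invariant still holds
True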

answered Oct 05 '22 by user3483203

I do not believe this exists in the Python language spec. However, the CPython implementation does specify it. The float_repr() function, which turns a float into a string, eventually calls a helper function with the 'r' formatter, which in turn calls a utility function that hardcodes the format to what comes down to format(float, '.16g'). That code can be seen in the CPython source; note that this is for Python 3.6, where the maximum number of significant digits (counting digits both before and after the decimal point) is 16.

It appears that in the Python 2.7 implementation this value was hardcoded to .12g instead, which is exactly the 12-digit behaviour shown in the question:

>>> import math
>>> str(math.pi*4)    # Python 2.7
'12.5663706144'

As for why this value was chosen, the documentation is somewhat lacking.

So if you are trying to find out how long a number will be when printed, simply format it with .12g and take the length of the result:

def len_when_displayed(n):
    return len(format(n, '.12g'))
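
As a quick sanity check, here is a sketch comparing the Python 2.7-style .12g, the .16g hardcoded in the 3.6 helper, and the shortest round-tripping repr():

>>> import math
>>> x = math.pi * 4
>>> format(x, '.12g')        # what str() produces in Python 2.7
'12.5663706144'
>>> format(x, '.16g')        # the format hardcoded in the Python 3.6 helper
'12.56637061435917'
>>> repr(x)                  # the shortest string that still round-trips
'12.566370614359172'
>>> len_when_displayed(x)    # len('12.5663706144')
13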
answered Oct 05 '22 by modesitt