I'm trying to solve some floating-point problems in my code in Python 2.7.10. When testing I've encountered a strange behaviour with the format method:
print "{}".format(0.3000000000004) # 13 decimals
Prints: 0.3
But:
print "{}".format(0.300000000004) # 12 decimals
Prints: 0.300000000004
Since I'm not specifying the format, why does it round the first number? Is there a default number of allowed decimal places?
Since you do not specify a format, the default coercion of the float to a string is used, so this isn't really an issue with format. Python 2's float.__str__ truncates to a precision of 12 significant digits, and after truncation any trailing zeros are stripped:
>>> str(0.3000000000004) # unlike str(0.3000000000014) -> '0.300000000001'
'0.3'
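As a rough illustration (my approximation, not the exact internal code path), str() here behaves much like %g formatting with 12 significant digits, which also strips the trailing zeros:
>>> '%.12g' % 0.3000000000004   # 13 significant digits; the 13th is rounded away
'0.3'
>>> '%.12g' % 0.300000000004    # 12 significant digits fit unchanged
'0.300000000004'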
Adding the format_spec :f gives you the default precision of 6:
>>> '{:f}'.format(0.3000000000004)
'0.300000'
Specify the width and precision explicitly, or use repr to get the full representation back:
>>> '{!r}'.format(0.3000000000004)
'0.3000000000004'
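For example, asking for 13 digits after the decimal point (enough for the value in the question) keeps it intact:
>>> '{:.13f}'.format(0.3000000000004)
'0.3000000000004'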
The behavior is different in Python 3 though:
>>> str(0.3000000000004)
'0.3000000000004'
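In Python 3 the two conversions agree, so str() no longer drops precision relative to repr():
>>> str(0.3000000000004) == repr(0.3000000000004)
True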
In Python 3, the formatting of floats is handled by just one function, float_repr (there is no separate float_str):
(reprfunc)float_repr, /* tp_repr */
...
(reprfunc)float_repr, /* tp_str */
whereas Python 2.7 defines separate handlers, float_str and float_repr, for __str__ and __repr__ respectively:
(reprfunc)float_repr, /* tp_repr */
...
(reprfunc)float_str, /* tp_str */
The deciding variable here, and I think the reason for the precision loss beyond 12 significant digits, is PyFloat_STR_PRECISION (defined in Python 2):
#define PyFloat_STR_PRECISION 12
It results in truncation for any float that needs more than 12 significant digits in the default conversion.
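You can see the cutoff directly in a Python 2.7 session by comparing str and repr for a value that needs more than 12 significant digits (1.0/3 is just an arbitrary example):
>>> str(1.0/3)     # str: capped at 12 significant digits
'0.333333333333'
>>> repr(1.0/3)    # repr: shortest string that round-trips
'0.3333333333333333'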