Direct conversion from float to Decimal was added in Python 2.7, both in the Decimal constructor and via the Decimal.from_float() classmethod.
Python 2.6 instead raises a TypeError telling you to convert the float to a string first:
TypeError: Cannot convert float to Decimal. First convert the float to a string
so my usual workaround is this:
import sys
from decimal import Decimal
if sys.version_info < (2, 7):
    Decimal.from_float = classmethod(lambda cls, x: cls(str(x)))
That's just a literal translation of the error message - and I don't bother patching the constructor as well.
If it's that simple, why didn't they implement it in the first place instead of telling the user to do it in a TypeError? Is that the best method available (and, by extension, is it the one used in Python 2.7 and newer)?
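For reference, this is all the shim does - the float is rounded to whatever str() prints before Decimal ever sees it:

from decimal import Decimal

# The shim routes through str(), so the result is the string-rounded value,
# not the exact binary value of the float.
Decimal(str(0.1))        # Decimal('0.1')
Decimal(str(1.0 / 3.0))  # only as many digits as str() chooses to print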
Your workaround is not the RightWayToDoIt(tm) because it loses information. The lossless way to convert is shown in the float_to_decimal() recipe in the Decimal FAQ.
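The FAQ recipe is built around float.as_integer_ratio(): roughly, it divides the exact numerator by the exact denominator at ever-higher precision until the division comes out exact. A sketch along those lines (reconstructed here, not quoted verbatim from the FAQ):

from decimal import Decimal, Context, Inexact

def float_to_decimal(f):
    "Convert a float to a Decimal with no loss of information."
    # A float is an exact ratio of two integers, so start from that.
    n, d = f.as_integer_ratio()
    numerator, denominator = Decimal(n), Decimal(d)
    ctx = Context(prec=60)
    result = ctx.divide(numerator, denominator)
    while ctx.flags[Inexact]:
        # Not exact yet: clear the flag, double the precision, retry.
        ctx.flags[Inexact] = False
        ctx.prec *= 2
        result = ctx.divide(numerator, denominator)
    return result

Because the denominator is always a power of two, the division eventually becomes exact and the loop terminates.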
The reason we didn't include Decimal.from_float in Python 2.6 is that we were being conservative about introducing unintentional interactions between binary floats and decimal floats. By Python 2.7, this was all worked out, and you can just write Decimal(f), where f is a binary float.
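For example (the long result below is the same exact value the docs quote for 0.1):

from decimal import Decimal

# Python 2.7+ and 3.x: the constructor accepts a binary float directly
# and converts it exactly, with no intermediate rounding.
Decimal(0.1)
# Decimal('0.1000000000000000055511151231257827021181583404541015625')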
Other than the small nuisance in 2.6, I hope you're enjoying the Decimal module.
Probably because the behavior of a direct conversion can be counterintuitive if you don't know a few implementation details about floats. As stated in the docs:
Note: Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value, which is 0x1.999999999999ap-4. That equivalent value in decimal is 0.1000000000000000055511151231257827021181583404541015625.

If you convert to a string, you can control the precision you want to use, so you can get an accurate conversion to Decimal.
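For instance (the format string and digit counts below are just illustrative choices):

from decimal import Decimal

x = 1.0 / 3.0
Decimal.from_float(x)   # exact binary value (Python 2.7+), dozens of digits
Decimal(repr(x))        # the float's repr: shortest round-tripping form on 2.7+/3.x
Decimal('%.6f' % x)     # caller-chosen precision: Decimal('0.333333')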
The new method was introduced in Python 2.7 - that's why it isn't in 2.6. New features are not backported to older versions.