Python's Decimal doesn't support being constructed from a float; it expects you to convert the float to a string first.
This is very inconvenient, since standard string formatters for float require that you specify the number of decimal places rather than significant digits. So if you have a number that could have as many as 15 decimal places, you need to format it as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
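To make the problem concrete, here is a small sketch using the example value from above (expected output shown in comments):

    from decimal import Decimal

    # 100000.3 cannot be represented exactly in binary, so padding it out to
    # 15 decimal places exposes the round-off noise:
    print(Decimal("%.15f" % 100000.3))   # Decimal('100000.300000000002910')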
Can someone suggest a good way to convert from float to Decimal, preserving the value as the user entered it, perhaps limiting the number of significant digits that can be supported?
As an aside: in Python, to print a float with 2 decimal places you can use str.format() with "{:.2f}" as the format string and the float as the argument. Calling print() on the result prints the float with 2 decimal places.
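For example (a minimal sketch; the variable name is illustrative):

    value = 3.14159
    # "{:.2f}" rounds the float to 2 decimal places when formatting.
    print("{:.2f}".format(value))   # 3.14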
The rules for converting a floating-point number into decimal are simply the reverse of the decimal-to-floating-point conversion: if the original number is in hex, convert it to binary; separate it into the sign, exponent, and mantissa fields; extract the mantissa from the mantissa field and restore the leading one.
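A rough sketch of those steps in Python (the decompose() helper is purely illustrative; it uses the struct module to read the IEEE 754 bit fields of a double):

    import struct

    def decompose(f):
        # View the 8-byte IEEE 754 representation of the float as an integer.
        (bits,) = struct.unpack(">Q", struct.pack(">d", f))
        sign = bits >> 63
        exponent = (bits >> 52) & 0x7FF
        mantissa = bits & ((1 << 52) - 1)
        # For normal numbers, restore the implicit leading 1 bit.
        if 0 < exponent < 0x7FF:
            mantissa |= 1 << 52
        return sign, exponent, mantissa

    print(decompose(100000.3))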
For the reverse conversion, Decimal to float, two methods are commonly cited: float_number = float(decimal_number) and float_number = decimal_number * 1.0. Note, however, that the second one raises a TypeError if decimal_number is a decimal.Decimal in Python 3, since Decimal does not mix with float in arithmetic.
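A small sketch of the conversion that does work (values in comments are based on the example above):

    from decimal import Decimal

    d = Decimal("100000.3")
    f = float(d)        # 100000.3
    # d * 1.0 raises a TypeError (Decimal does not mix with float)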
"%.15g" % f
Or, with the format() built-in (Python 2.6+ / 3.x):
format(f, ".15g")
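Applied to the example value from the question, the "g" conversion works in significant digits, so the round-off noise is dropped (expected output shown in comments):

    from decimal import Decimal

    f = 100000.3
    print("%.15g" % f)            # 100000.3
    print(format(f, ".15g"))      # 100000.3
    print(Decimal("%.15g" % f))   # Decimal('100000.3')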
Just pass the float to the Decimal constructor directly, like this:

    from decimal import Decimal

    Decimal(f)
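This works in Python 2.7+ and 3.2+, and the resulting Decimal is the exact binary value stored in the float, not the shorter string the user typed. For example:

    from decimal import Decimal

    # Decimal(float) captures the exact IEEE 754 value of the float:
    print(Decimal(0.1))
    # Decimal('0.1000000000000000055511151231257827021181583404541015625')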
You said in your question:
"Can someone suggest a good way to convert from float to Decimal, preserving the value as the user entered it"
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
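A minimal sketch of that advice, assuming the value arrives as the string the user typed (the variable name is illustrative):

    from decimal import Decimal

    user_input = "100000.3"      # whatever string the user actually typed
    d = Decimal(user_input)      # Decimal('100000.3'), no precision lost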