
Comparing Python Decimals created from float and string

Tags:

python

decimal

Can someone explain why the following three examples are not all equal?

ipdb> Decimal(71.60) == Decimal(71.60)
True
ipdb> Decimal('71.60') == Decimal('71.60')
True
ipdb> Decimal(71.60) == Decimal('71.60')
False

Is there a general 'correct' way to create Decimal objects in Python? (i.e., as strings or as floats)

asked Aug 17 '15 by mafrosis

2 Answers

Floating point numbers, which are what Python uses by default, are stored in base 2. 71.6 can't be represented exactly in base 2 (think of numbers like 1/3 in base 10).

Because of this, the value is rounded to the nearest number the floating point format can hold. The binary expansion of 71.6 would go on forever, and you almost certainly don't have infinite memory to play with, so the computer is told to represent it with a limited number of bits.

If you use a string instead, Decimal can convert it exactly, rather than starting from the already-rounded floating point number.

>>> decimal.Decimal(71.6)
Decimal('71.599999999999994315658113919198513031005859375')

Compared to

>>> decimal.Decimal("71.6")
Decimal('71.6')

However, if your number is representable exactly as a float, it is just as accurate as a string:

>>> decimal.Decimal(71.5)
Decimal('71.5')
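
As a quick sanity check (a minimal sketch, simply re-running the comparisons from the question against these two values):

>>> decimal.Decimal(71.5) == decimal.Decimal('71.5')
True
>>> decimal.Decimal(71.6) == decimal.Decimal('71.6')
False
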
answered Sep 25 '22 by muddyfish


Normally, Decimal is used to avoid floating point precision problems. For example, the float literal 71.60 isn't mathematically 71.60, but a number very close to it.

As a result, using a float to initialize a Decimal won't avoid the problem. In general, you should use strings to initialize Decimal.
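
A minimal sketch of what that looks like in practice (the str() round-trip shown here is a common workaround for values that already arrive as floats, not something prescribed by this answer):

>>> from decimal import Decimal
>>> Decimal('71.60')                          # exact: constructed from a string
Decimal('71.60')
>>> Decimal(str(71.60))                       # an existing float, re-rounded to its shortest repr
Decimal('71.6')
>>> Decimal('71.60') == Decimal(str(71.60))   # Decimal comparison is numeric, trailing zeros don't matter
True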

answered Sep 24 '22 by Yu Hao