
Why decimals do not interoperate with floats

Tags: python, decimal

The "Notes on Decimal" comment in CPython's numbers.py says:

## Decimal has all of the methods specified by the Real abc, but it should
## not be registered as a Real because decimals do not interoperate with
## binary floats (i.e.  Decimal('3.14') + 2.71828 is undefined).  But,
## abstract reals are expected to interoperate (i.e. R1 + R2 should be
## expected to work if R1 and R2 are both Reals).

I don't understand why Decimal('3.14') + 2.71828 is undefined. Decimal can be constructed from a float, so I think __add__ could be implemented as follows:

def __add__(self, other):
    if isinstance(other, float):
        # convert the float operand to Decimal, then add as usual
        return self + Decimal(other)
    ...

With that, Decimal('3.14') + 2.71828 would become Decimal('3.14') + Decimal(2.71828) = Decimal('5.858280000000000153903556566').

Can you explain why they do not interoperate in the current implementation?
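
For reference, this is what mixing the two types does in the current implementation; the outputs in the comments are from a recent CPython 3 session:

from decimal import Decimal

try:
    Decimal('3.14') + 2.71828          # arithmetic with a float operand is refused outright
except TypeError as exc:
    print(exc)                          # unsupported operand type(s) for +: 'decimal.Decimal' and 'float'

# Comparisons, by contrast, do interoperate exactly (since Python 3.2):
print(Decimal('0.5') == 0.5)    # True  -- 0.5 is exact in binary
print(Decimal('3.14') == 3.14)  # False -- the float is not exactly 3.14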

EDIT: it is possible to preserve precision when converting a float to a Decimal by converting it to str first:

def __add__(self, other):
    if isinstance(other, float):
        # round-trip through str: Decimal(str(2.71828)) == Decimal('2.71828')
        return self + Decimal(str(other))
    ...

Thus Decimal('3.14') + 2.71828 = Decimal('3.14') + Decimal('2.71828') = Decimal('5.85828'), without losing precision.
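
For anyone who wants to experiment with the idea, here is a minimal sketch of a Decimal subclass along those lines; the name MixedDecimal is made up for illustration and nothing like it exists in the standard library:

from decimal import Decimal

class MixedDecimal(Decimal):
    """Hypothetical Decimal subclass that accepts float operands in addition."""

    def __add__(self, other):
        if isinstance(other, float):
            # Round-trip through str, so 2.71828 becomes Decimal('2.71828'),
            # not the exact binary value of the float.
            other = Decimal(str(other))
        result = super().__add__(other)
        return result if result is NotImplemented else MixedDecimal(result)

    __radd__ = __add__  # addition is commutative, so float + MixedDecimal works too

print(MixedDecimal('3.14') + 2.71828)   # 5.85828
print(2.71828 + MixedDecimal('3.14'))   # 5.85828

Whether a float operand should mean Decimal(str(x)) or the exact Decimal(x) is precisely the choice such a sketch has to make on the caller's behalf; the standard library sidesteps it by raising TypeError instead.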

Asked Aug 30 '19 by sanyassh

1 Answer

The key is in how the Decimal is initialized.

Notice how in all the examples in the documentation, Decimal is constructed from a string. This lets us pass a number without losing precision when storing it.

number = Decimal('1.1')
print(number)

The output of the code above will always be 1.1, because the string '1.1' is stored exactly as Decimal('1.1').

Although initializing a Decimal from a float is possible, doing so loses the very thing we want to achieve by using Decimal in the first place: precision. Consider the following line of code.

number = Decimal(1.1)

What is the value inside number? On my machine, the output is

Decimal('1.100000000000000088817841970012523233890533447265625')

The literal 1.1 is first stored in a float, which means it has already lost precision, and that float is then used to initialize the Decimal. That's why you should initialize Decimals from strings, not floats: it avoids losing precision, which is the whole point of using Decimal in the first place.
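
To see the difference side by side, here is a short example; Decimal.from_float is the stdlib classmethod that makes the exact float conversion explicit:

from decimal import Decimal

print(Decimal('1.1'))                  # 1.1 -- the string form is stored exactly
print(Decimal(1.1))                    # 1.100000000000000088817841970012523233890533447265625
print(Decimal.from_float(1.1))         # same exact value, but the intent is explicit
print(Decimal(str(1.1)))               # 1.1 -- the str round-trip from the question's EDIT
print(Decimal('1.1') == Decimal(1.1))  # False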

Answered Nov 11 '22 by Kaldrr