Python: Decimal addition and subtraction not giving exact result

Python (3.8) code:

#!/usr/bin/env python3

from decimal import Decimal
from decimal import getcontext

x = Decimal('0.6666666666666666666666666667')
y = x
print(getcontext().prec)
print(y)
print(y == x)
y += x; y += x; y += x
y -= x; y -= x; y -= x
print(y)
print(y == x)

Python output:

28
0.6666666666666666666666666667
True
0.6666666666666666666666666663
False

Java code:

import java.math.BigDecimal;

public class A
{
    public static void main(String[] args)
    {
        BigDecimal x = new BigDecimal("0.6666666666666666666666666667");
        BigDecimal y = new BigDecimal("0.6666666666666666666666666667");

        System.out.println(x.precision());
        System.out.println(y.precision());

        System.out.println(y);
        System.out.println(y.equals(x));

        y = y.add(x); y = y.add(x); y = y.add(x);
        y = y.subtract(x); y = y.subtract(x); y = y.subtract(x);

        System.out.println(y);
        System.out.println(y.equals(x));
    }
}

Java output:

28
28
0.6666666666666666666666666667
true
0.6666666666666666666666666667
true

What is the way to achieve arbitrary precision in Python, so that the result stays exact as in the Java version? By setting a very large prec?

Asked Dec 30 '22 by Cyker

2 Answers

From Python documentation:

The decimal module incorporates a notion of significant places so that 1.30 + 1.20 is 2.50.
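For instance, a minimal sketch of that rule:

from decimal import Decimal

# Trailing zeros are significant and survive addition:
print(Decimal('1.30') + Decimal('1.20'))   # 2.50, not 2.5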

Moreover, the following also need to be considered:

The context precision does not affect how many digits are stored. That is determined exclusively by the number of digits in value. For example, Decimal('3.00000') records all five zeros even if the context precision is only three.

Context precision and rounding only come into play during arithmetic operations.

Therefore:

import decimal
from decimal import Decimal

decimal.getcontext().prec = 4

a = Decimal('1.22222')

# Prints 1.22222: what you put in is what you get,
# even though prec was set to 4.
print(a)

b = Decimal('0.22222')

# Prints 0.22222, for the same reason.
print(b)

a += 0; b += 0

# a is now 1.222 (rounded to 4 significant figures).
# b is now 0.2222 (leading zeroes are not significant!).
print('\n', a, '\n', b, sep='')
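Applied to the code in the question, this suggests one workaround: raise prec above the widest intermediate result so the additions never round. Here the running sum grows to 29 significant digits, so prec = 30 (a value assumed here for a little headroom, not a general guarantee for longer chains) keeps every step exact:

from decimal import Decimal, getcontext

getcontext().prec = 30  # enough for the 29-digit intermediate sums

x = Decimal('0.6666666666666666666666666667')
y = x
y += x; y += x; y += x
y -= x; y -= x; y -= x

print(y)       # 0.6666666666666666666666666667
print(y == x)  # True

For longer computations you would size prec to the worst-case intermediate width, or raise it temporarily with decimal.localcontext().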
Answered Jan 14 '23 by Henry Tjhia

From the Decimal docs:

The use of decimal floating point eliminates decimal representation error (making it possible to represent 0.1 exactly); however, some operations can still incur round-off error when non-zero digits exceed the fixed precision.

The effects of round-off error can be amplified by the addition or subtraction of nearly offsetting quantities resulting in loss of significance. Knuth provides two instructive examples where rounded floating point arithmetic with insufficient precision causes the breakdown of the associative and distributive properties of addition:

# Examples from Seminumerical Algorithms, Section 4.2.2.
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 8

>>> u, v, w = Decimal(11111113), Decimal(-11111111), Decimal('7.51111111')
>>> (u + v) + w 
Decimal('9.5111111')
>>> u + (v + w) 
Decimal('10')

>>> u, v, w = Decimal(20000), Decimal(-6), Decimal('6.0000003')
>>> (u*v) + (u*w) 
Decimal('0.01')
>>> u * (v+w) 
Decimal('0.0060000') 
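Conversely, a small sketch using the same example values: once prec is large enough to hold every intermediate result exactly (16 digits happens to suffice here), the identities hold again:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 16

>>> u, v, w = Decimal(11111113), Decimal(-11111111), Decimal('7.51111111')
>>> (u + v) + w == u + (v + w)
True

>>> u, v, w = Decimal(20000), Decimal(-6), Decimal('6.0000003')
>>> (u*v) + (u*w) == u * (v+w)
True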
Answered Jan 14 '23 by Roman Ferenets