When BigDecimal is used with an input of double and BigDecimal with an input of String, different results seem to appear.
BigDecimal a = new BigDecimal(0.333333333);
BigDecimal b = new BigDecimal(0.666666666);
BigDecimal c = new BigDecimal("0.333333333");
BigDecimal d = new BigDecimal("0.666666666");
BigDecimal x = a.multiply(b);
BigDecimal y = c.multiply(d);
System.out.println(x);
System.out.println(y);
x outputs as
0.222222221777777790569747304508155316795087227497352441864147715340493949298661391367204487323760986328125
while y is
0.222222221777777778
Am I wrong in saying that this is because of double imprecision? But since this is a BigDecimal, shouldn't it be the same?
Am I wrong in saying that this is because of double imprecision?
You are absolutely right, this is exactly because of double's imprecision.
But since this is a BigDecimal, shouldn't it be the same?
No, it shouldn't. The error is introduced the moment you create new BigDecimal(0.333333333), because the constant 0.333333333 already has an error embedded in it. At that point there is nothing you can do to fix this representation error: the proverbial horse is out of the barn by then, so it's too late to close the doors.
When you pass a String, on the other hand, the decimal representation matches the string exactly, so you get a different result.
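To make the contrast concrete, here is a hedged sketch comparing three ways of getting from the same digits to a BigDecimal. BigDecimal.valueOf is not mentioned in the answer above, but it is the usual workaround when all you have is a double, because it goes through Double.toString rather than the exact binary value.
import java.math.BigDecimal;

public class StringVsDoubleDemo {
    public static void main(String[] args) {
        double d = 0.333333333;
        System.out.println(new BigDecimal(d));              // exact value of the double: long tail of digits
        System.out.println(new BigDecimal("0.333333333"));  // exactly 0.333333333
        System.out.println(BigDecimal.valueOf(d));          // via Double.toString(d), prints 0.333333333
    }
}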
Yes, this is floating-point error. The problem is that the literals 0.333333333 and 0.666666666 are represented as doubles before they are ever passed to BigDecimal: the constructor being called here is the one that takes a double. This is consistent with the Java Language Specification, which says that a floating-point literal is of type double unless it is suffixed otherwise.
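A small sketch of that point: the rounding happens when the literal is parsed, not inside BigDecimal, so passing an explicitly declared double variable produces exactly the same BigDecimal as passing the literal directly.
import java.math.BigDecimal;

public class LiteralIsDoubleDemo {
    public static void main(String[] args) {
        // The literal 0.333333333 is already a double (the JLS default for
        // unsuffixed floating-point literals) before any constructor is chosen.
        double d = 0.333333333;
        System.out.println(new BigDecimal(d).equals(new BigDecimal(0.333333333))); // true
    }
}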