The rounding error is not random; the implementation attempts to minimise the error. This means that sometimes the error is not visible, or there is no error.
For example, `0.1` is not exactly 0.1, i.e. `new BigDecimal("0.1") < new BigDecimal(0.1)`, but `0.5` is exactly `1.0/2`.
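You can check this yourself with `BigDecimal.compareTo` (a minimal, self-contained sketch; the class name is just for the example):

```java
import java.math.BigDecimal;

public class ExactnessCheck {
    public static void main(String[] args) {
        // new BigDecimal(double) captures the exact binary value of the double;
        // new BigDecimal(String) captures the exact decimal value.
        BigDecimal decimalTenth = new BigDecimal("0.1");
        BigDecimal doubleTenth = new BigDecimal(0.1);
        System.out.println(decimalTenth.compareTo(doubleTenth)); // -1: the double 0.1 is slightly too large
        System.out.println(new BigDecimal("0.5").compareTo(new BigDecimal(0.5))); // 0: 0.5 is exact
    }
}
```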
The following program shows the true values involved.
```java
import java.math.BigDecimal;

// new BigDecimal(double) captures the exact binary value of the double 0.1
BigDecimal _0_1 = new BigDecimal(0.1);
BigDecimal x = _0_1;
for (int i = 1; i <= 10; i++) {
    System.out.println(x + ", as double " + x.doubleValue());
    x = x.add(_0_1);
}
```

prints

```
0.1000000000000000055511151231257827021181583404541015625, as double 0.1
0.2000000000000000111022302462515654042363166809082031250, as double 0.2
0.3000000000000000166533453693773481063544750213623046875, as double 0.30000000000000004
0.4000000000000000222044604925031308084726333618164062500, as double 0.4
0.5000000000000000277555756156289135105907917022705078125, as double 0.5
0.6000000000000000333066907387546962127089500427246093750, as double 0.6000000000000001
0.7000000000000000388578058618804789148271083831787109375, as double 0.7000000000000001
0.8000000000000000444089209850062616169452667236328125000, as double 0.8
0.9000000000000000499600361081320443190634250640869140625, as double 0.9
1.0000000000000000555111512312578270211815834045410156250, as double 1.0
```
Note that `0.3` is slightly off, but when you get to `0.4` the bits have to shift down one to fit into the 53-bit limit, and the error is discarded. Again, an error creeps back in for `0.6` and `0.7`, but for `0.8` to `1.0` the error is discarded.
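You can see the discarded errors directly by comparing the repeated sums against the plain `double` literals (`==` on `double` is an exact bit-for-bit comparison, so no tolerance is involved):

```java
public class DiscardedError {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.1 + 0.1);                      // 0.30000000000000004
        System.out.println(0.1 + 0.1 + 0.1 == 0.3);               // false: the error is visible
        System.out.println(0.1 + 0.1 + 0.1 + 0.1 == 0.4);         // true: the error was discarded
        System.out.println(0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 0.5);   // true: discarded again
    }
}
```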
> Adding it 5 times should accumulate the error, not cancel it.
The reason there is an error is the limited precision, i.e. 53 bits. This means that as the number gets larger and uses more bits, bits have to be dropped off the end. This causes rounding, which in this case is in your favour.
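`Math.ulp` shows this limited precision directly: the gap between adjacent `double`s doubles every time the value crosses a power of two, because the same 53 significand bits have to cover a wider range:

```java
public class UlpDemo {
    public static void main(String[] args) {
        // The spacing of representable doubles doubles with each binade.
        System.out.println(Math.ulp(0.25)); // 5.551115123125783E-17  (2^-54)
        System.out.println(Math.ulp(0.5));  // 1.1102230246251565E-16 (2^-53)
        System.out.println(Math.ulp(1.0));  // 2.220446049250313E-16  (2^-52)
    }
}
```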
You can get the opposite effect when the result is a smaller number, e.g. `0.1 - 0.0999` => `1.0000000000000286E-4`, where you see more error than before.
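You can reproduce the quoted value directly; the subtraction cancels the leading bits on which the two inputs agree, leaving their representation errors exposed in the result:

```java
public class Cancellation {
    public static void main(String[] args) {
        // Subtracting nearly equal values cancels the leading bits,
        // so the inputs' representation errors dominate the result.
        System.out.println(0.1 - 0.0999); // 1.0000000000000286E-4
    }
}
```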
An example of this is the Java 6 question “Why does Math.round(0.49999999999999994) return 1?” In this case the loss of a bit in the calculation results in a big difference in the answer.
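Java 6 documented `Math.round(double)` as `(long) Math.floor(x + 0.5)`, and `0.49999999999999994` is the largest `double` below 0.5; adding 0.5 to it is not representable, and the sum rounds up to exactly 1.0. A sketch of that mechanism (not the current JDK implementation, which returns 0 here):

```java
public class RoundBit {
    public static void main(String[] args) {
        double x = 0.49999999999999994; // the largest double below 0.5
        // x + 0.5 cannot be represented exactly: it rounds up to 1.0.
        System.out.println(x + 0.5);                    // 1.0
        System.out.println((long) Math.floor(x + 0.5)); // 1 -- the Java 6 behaviour
    }
}
```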
Barring overflow, in floating-point, `x + x + x` is exactly the correctly rounded (i.e. nearest) floating-point number to the real 3*x, `x + x + x + x` is exactly `4*x`, and `x + x + x + x + x` is again the correctly rounded floating-point approximation for 5*x.
The first result, for `x + x + x`, derives from the fact that `x + x` is exact. `x + x + x` is thus the result of only one rounding.
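The exactness of `x + x` can be spot-checked against exact arithmetic, since `BigDecimal` can represent any finite `double` exactly (a randomized sketch, nothing more):

```java
import java.math.BigDecimal;
import java.util.Random;

public class DoublingIsExact {
    public static void main(String[] args) {
        Random r = new Random();
        for (int i = 0; i < 100_000; i++) {
            double x = r.nextDouble();
            // x + x only increments the exponent; the significand is
            // unchanged, so the floating-point sum equals the exact sum.
            BigDecimal exact = new BigDecimal(x).add(new BigDecimal(x));
            if (exact.compareTo(new BigDecimal(x + x)) != 0) {
                System.out.println("inexact for " + x);
            }
        }
        System.out.println("done"); // no output above means x + x was always exact
    }
}
```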
The second result is more difficult; one demonstration of it is discussed here (and Stephen Canon alludes to another proof, by case analysis on the last 3 digits of `x`). To summarize, either `3*x` is in the same binade as `2*x` or it is in the same binade as `4*x`, and in each case it is possible to deduce that the error on the third addition cancels the error on the second addition (the first addition being exact, as we already said).
The third result, “`x + x + x + x + x` is correctly rounded”, derives from the second in the same way that the first derives from the exactness of `x + x`.
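All three identities are easy to spot-check empirically (a randomized sketch; it only tests, it does not prove anything):

```java
import java.util.Random;

public class SumVsMultiple {
    public static void main(String[] args) {
        Random r = new Random();
        for (int i = 0; i < 1_000_000; i++) {
            double x = r.nextDouble(); // in [0, 1), so no overflow
            // 3*x and 5*x are single roundings of the exact products, so if
            // the left-to-right sums are correctly rounded they must agree;
            // 4*x is exact (a power-of-two scaling).
            if (x + x + x != 3 * x) System.out.println("3x differs for " + x);
            if (x + x + x + x != 4 * x) System.out.println("4x differs for " + x);
            if (x + x + x + x + x != 5 * x) System.out.println("5x differs for " + x);
        }
        System.out.println("done"); // no output above means no counterexamples
    }
}
```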
The second result explains why `0.1 + 0.1 + 0.1 + 0.1` is exactly the floating-point number `0.4`: the rational numbers 1/10 and 4/10 get approximated the same way, with the same relative error, when converted to floating-point. These floating-point numbers have a ratio of exactly 4 between them. The first and third results show that `0.1 + 0.1 + 0.1` and `0.1 + 0.1 + 0.1 + 0.1 + 0.1` can be expected to have less error than might be inferred by naive error analysis, but, in themselves, they only relate the results to respectively `3 * 0.1` and `5 * 0.1`, which can be expected to be close but not necessarily identical to `0.3` and `0.5`.
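To make that distinction concrete: the sums match the products, but the products need not match the decimal literals:

```java
public class SumProductLiteral {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.1 + 0.1 == 3 * 0.1);             // true: the sum equals the product...
        System.out.println(3 * 0.1 == 0.3);                         // false: ...but the product is not 0.3
        System.out.println(0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 5 * 0.1); // true
        System.out.println(5 * 0.1 == 0.5);                         // true: here it happens to match the literal
    }
}
```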
If you keep adding `0.1` after the fourth addition, you will finally observe rounding errors that make “`0.1` added to itself n times” diverge from `n * 0.1`, and diverge even more from n/10. If you were to plot the values of “0.1 added to itself n times” as a function of n, you would observe lines of constant slope by binades (as soon as the result of the nth addition is destined to fall into a particular binade, the properties of the addition can be expected to be similar to previous additions that produced a result in the same binade). Within the same binade, the error will either grow or shrink. If you were to look at the sequence of the slopes from binade to binade, you would recognize the repeating digits of `0.1` in binary for a while. After that, absorption would start to take place and the curve would go flat.
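You do not need a plotting library to watch this happen; printing the running divergence is enough (a sketch; the sample points are arbitrary):

```java
public class Divergence {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int n = 1; n <= 1_000_000; n++) {
            sum += 0.1;
            if (n % 100_000 == 0) {
                // Compare repeated addition with a single rounding of n * 0.1
                // and with the nearest double to the rational n/10.
                System.out.println(n + ": sum - n*0.1 = " + (sum - n * 0.1)
                        + ", sum - n/10 = " + (sum - n / 10.0));
            }
        }
    }
}
```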