I have a system that performs a lot of calculations using decimals. Occasionally it will add up the same numbers but return different results, off by +/- 0.000000000000000000000000001.
Here is a short example:
decimal a = 2.016879990455473621256359079m;
decimal b = 0.8401819425625631128956517177m;
decimal c = 0.4507062854741283043456903406m;
decimal d = 6.7922317815078349615022988627m;
decimal result1 = a + b + c + d;
decimal result2 = a + d + c + b;
Console.WriteLine((result1 == result2) ? "Same" : "DIFFERENT");
Console.WriteLine(result1);
Console.WriteLine(result2);
That outputs:
DIFFERENT
10.100000000000000000000000000
10.100000000000000000000000001
The differences are so small that there is no practical effect, but has anyone seen something like this before? I expected that when adding up the same numbers you would always get the same results.
The entire field of numerical analysis is devoted to studying these kinds of effects and how to avoid them.
To produce the best result when summing a list of floating point numbers, first sort the list from smallest to largest magnitude, and add them up in that order.
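As a rough sketch of that idea in C# (the SumSmallestFirst helper is hypothetical, not something from the question), you could order the values by magnitude before accumulating them:

using System;
using System.Linq;

class SortedSum
{
    // Hypothetical helper: sums values after ordering them by magnitude,
    // which tends to reduce accumulated rounding error.
    static decimal SumSmallestFirst(decimal[] values)
    {
        return values.OrderBy(v => Math.Abs(v)).Aggregate(0m, (acc, v) => acc + v);
    }

    static void Main()
    {
        decimal[] values =
        {
            2.016879990455473621256359079m,
            0.8401819425625631128956517177m,
            0.4507062854741283043456903406m,
            6.7922317815078349615022988627m
        };

        Console.WriteLine(SumSmallestFirst(values));
    }
}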
You might suspect a decimal type to be immune to the bane of double users everywhere. But because decimal has 28-29 digits of precision and your input data carries about 29 significant digits, you're right at the very edge of what your data type can accurately represent.
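One way to see this (a sketch reusing the numbers from the question) is to print the intermediate partial sums: each addition is rounded to at most 28-29 significant digits, so the two orders can round at different points along the way.

using System;

class PartialSums
{
    static void Main()
    {
        decimal a = 2.016879990455473621256359079m;
        decimal b = 0.8401819425625631128956517177m;
        decimal c = 0.4507062854741283043456903406m;
        decimal d = 6.7922317815078349615022988627m;

        // Each intermediate result is rounded to the decimal type's
        // precision, so different orders can accumulate different
        // rounding in the last digit.
        Console.WriteLine(a + b);           // order 1, step 1
        Console.WriteLine(a + b + c);       // order 1, step 2
        Console.WriteLine(a + b + c + d);   // order 1, final

        Console.WriteLine(a + d);           // order 2, step 1
        Console.WriteLine(a + d + c);       // order 2, step 2
        Console.WriteLine(a + d + c + b);   // order 2, final
    }
}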