When division results in an infinitely repeating decimal, the result gets truncated to fit the precision of the decimal type. So something like 1/3 becomes something like 0.3333333333333333333. If we then multiply that number by 3, we get something like 0.9999999999999999999 rather than the 1 we would get if the true value of the fraction had been preserved.
Here is a code sample demonstrating this, from the MSDN article on decimal:
decimal dividend = Decimal.One;
decimal divisor = 3;
// The following displays 0.9999999999999999999999999999 to the console
Console.WriteLine(dividend / divisor * divisor);
This causes an issue when the value 0.9999999999999999999999999999 is compared with 1 for equality. Without the loss of precision they would be equal, but in this case the comparison results in false.
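For example, extending the snippet above (a minimal illustration reusing the same dividend and divisor):

// The following displays False, even though the exact value would be 1
Console.WriteLine(dividend / divisor * divisor == Decimal.One);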
How do people typically deal with this problem? Is there a more elegant solution other than defining some margin of error to every comparison?
A repeating decimal, also referred to as a recurring decimal, is a decimal number with a digit, or group of digits, that repeats on and on without end; in other words, the digits are periodic. The repeating digits also cannot all be zero: 1.000000 is not a repeating decimal, even though we can add an infinite number of 0s after the decimal point.
The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros.
Every infinitely repeating decimal can be expressed as a fraction. For example, find the fraction represented by the repeating decimal 0.777…. Let n stand for 0.77777…, so 10n stands for 7.77777…. 10n and n have the same fractional part, so their difference 10n - n = 9n is an integer, namely 7. Solving 9n = 7 gives n = 7/9.
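As a quick sanity check of that algebra in C# (a minimal sketch; note that decimal's own rounding shows up in the last digit, which is exactly the truncation discussed in the question):

decimal n = 7m / 9m;            // 0.7777777777777777777777777778 after rounding
Console.WriteLine(10m * n - n); // close to 7, but not exactly 7, because n was rounded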
As you have mentioned, the result of a calculation using floating-point numbers has to be 'fitted' into the floating-point representation. That's why exact comparison is not a good idea; some tolerance is required. So instead of x == y, Math.Abs(x - y) < tolerance should be used.
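A minimal sketch of such a comparison (the helper name NearlyEqual and the tolerance value are illustrative choices, not from the original answer):

using System;

class ToleranceDemo
{
    // Compare two decimals within an absolute tolerance instead of using ==.
    static bool NearlyEqual(decimal x, decimal y, decimal tolerance) =>
        Math.Abs(x - y) < tolerance;

    static void Main()
    {
        decimal result = Decimal.One / 3m * 3m;  // 0.9999999999999999999999999999
        Console.WriteLine(result == 1m);                         // False
        Console.WriteLine(NearlyEqual(result, 1m, 0.0000001m));  // True
    }
}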
This is a very old and widely known problem in numeric computation. You already said you are looking for a solution other than defining some margin of error for every comparison. One approach that comes to mind is to build the mathematical expression tree in memory first and do the calculation last. With this at hand we can simplify the expression using known rules before doing any calculation, for example the rule that (a / b) * b = a.
Therefore, instead of storing 1/3 in a decimal/double value, which equals 0.33333..., we can store an instance of Fraction(1, 3). We can then define all other expressions in the same way, so that we build up an expression rather than performing the calculations eagerly. In the end we first simplify the expression using the rules above and only then calculate the result.
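A minimal sketch of such a Fraction type (the type and its members are illustrative, not from an existing library; it only covers the multiplication needed for the 1/3 example):

using System;

// Illustrative exact-fraction type: stores numerator/denominator instead of
// a rounded decimal, so 1/3 * 3 simplifies back to exactly 1.
readonly struct Fraction
{
    public readonly long Numerator;
    public readonly long Denominator;

    public Fraction(long numerator, long denominator)
    {
        if (denominator == 0) throw new DivideByZeroException();
        if (denominator < 0) { numerator = -numerator; denominator = -denominator; }
        long g = Gcd(Math.Abs(numerator), denominator);
        Numerator = numerator / g;
        Denominator = denominator / g;
    }

    static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

    public static Fraction operator *(Fraction f, long n) =>
        new Fraction(f.Numerator * n, f.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

class Program
{
    static void Main()
    {
        var oneThird = new Fraction(1, 3);
        Console.WriteLine(oneThird * 3);  // prints 1/1: no precision was lost
    }
}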
I searched the web briefly for libraries that do this and haven't found any yet, but I'm sure some can be found for other languages/platforms, or even for .NET.
Please note that, in the end, the above approach only produces a better result; it does not solve the underlying problem, which is inherent in the nature of numeric computation.