If you run the following code, the result is a bit bizarre:
decimal x = (276m / 304m) * 304m;   // m suffixes force decimal arithmetic; plain 276/304 is integer division (0)
double y = (276.0 / 304.0) * 304.0;
Console.WriteLine("decimal x = " + x);
Console.WriteLine("double y = " + y);
Result:
decimal x = 275.99999999999999999999999
double y = 276.0
Can someone explain this to me? I don't understand how this can be correct.
276/304 = 69/76 is a recurring "decimal" in both base 10 and base 2.
So the result gets rounded off, and multiplying by the denominator may not result in the original numerator. A more commonly cited example of this situation is 1/3 * 3 = 0.33333333 * 3 = 0.99999999.
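You can see the same effect directly with thirds (a minimal C# sketch of the analogy, not from the original post):

decimal third = 1m / 3m;          // rounded to 0.3333333333333333333333333333
Console.WriteLine(third * 3m);    // prints 0.9999999999999999999999999999, not 1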
That the double version gives the exact answer is just a coincidence. The rounding error in the multiplication just happens to cancel out the rounding error in the division.
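To make the cancellation visible (my own illustrative sketch using the same numbers; 1/49 is a commonly cited ratio where the errors do not cancel):

double q = 276.0 / 304.0;                      // nearest double to 69/76 — already slightly off
Console.WriteLine(q * 304.0 == 276.0);         // True: the multiply's rounding error undoes the divide's
Console.WriteLine(1.0 / 49.0 * 49.0 == 1.0);   // False: with other ratios the errors need not cancel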
If this result is confusing, it may be because you've heard that "double has rounding errors and decimal is exact". But decimal is only exact at representing decimal fractions like 0.1 (which is 0.0001100110011... in binary). When you have a factor of 19 in the denominator (76 = 4 × 19), it doesn't help you.
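You can check the asymmetry yourself (my own illustrative comparisons, not from the answer):

Console.WriteLine(0.1m + 0.2m == 0.3m);    // True:  decimal represents 0.1 exactly
Console.WriteLine(0.1 + 0.2 == 0.3);       // False: double cannot represent 0.1 exactly
Console.WriteLine(1m / 19m * 19m == 1m);   // False: a factor of 19 defeats decimal as well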
Well, floating-point precision isn't 100%.
See for example: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm