In C# 4.0, the following cast behaves very unexpectedly:
(decimal)1056964.63f
1056965
Casting to double works just fine:
(double)1056964.63f
1056964.625
(decimal)(double)1056964.63f
1056964.625
Is this by design?
The problem is with your initial value - float is only accurate to 7 significant decimal digits anyway:
float f = 1056964.63f;
Console.WriteLine(f); // Prints 1056965
So really it's the second example that's the surprising one, in some ways.
Now the exact value in f is 1056964.625, but that's the value you get for every literal from about 1056964.563 to 1056964.687 - so even the ".6" part isn't necessarily correct. That's why the docs for System.Single state:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
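That rounding window is easy to demonstrate. Here's a minimal sketch (the literals 1056964.57f and 1056964.68f are just illustrative values chosen inside the window at this magnitude, where adjacent float values are 0.125 apart):

using System;

class FloatPrecisionDemo
{
    static void Main()
    {
        // Every literal between ~1056964.5625 and ~1056964.6875 rounds
        // to the same stored float value, 1056964.625.
        float low  = 1056964.57f;
        float mid  = 1056964.63f;
        float high = 1056964.68f;

        Console.WriteLine(low == mid && mid == high); // True - one stored value

        // Widening to double exposes the exact stored value.
        Console.WriteLine((double)mid); // 1056964.625

        // The round-trip format uses up to 9 digits, matching the docs quote.
        // (Prints 1056964.63 on .NET Framework; newer runtimes may print the
        // shorter round-trippable form 1056964.6.)
        Console.WriteLine(mid.ToString("R"));
    }
}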
The extra information is still preserved when you convert to double, because double can hold the value exactly without "interpreting" it at all - whereas converting it to a decimal form (either to print, or for the decimal type) goes through code which knows it can't "trust" those last two digits.
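To put the two routes side by side, here's a sketch of the same conversions (the 7-digit and 15-digit rounding behaviour is documented for the Decimal(Single) and Decimal(Double) constructors, which the explicit casts use):

using System;

class FloatToDecimalDemo
{
    static void Main()
    {
        float f = 1056964.63f; // stored exactly as 1056964.625

        // float -> double is a widening conversion: the exact binary value
        // is preserved, with no decimal rounding involved.
        double d = f;
        Console.WriteLine(d); // 1056964.625

        // float -> decimal rounds to 7 significant digits, because digits
        // beyond that can't be trusted in a float.
        Console.WriteLine((decimal)f); // 1056965

        // double -> decimal keeps up to 15 significant digits, so going via
        // double preserves the extra information.
        Console.WriteLine((decimal)d); // 1056964.625
    }
}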