 

Casting float to decimal loses precision in C#

In C# 4.0, the following cast behaves very unexpectedly:

(decimal)1056964.63f
1056965

Casting to double works just fine:

(double)1056964.63f
1056964.625

(decimal)(double)1056964.63f
1056964.625

Is this by design?

asked Nov 27 '11 by Yuri Astrakhan

1 Answer

The problem is with your initial value - float is only accurate to 7 significant decimal digits anyway:

float f = 1056964.63f;
Console.WriteLine(f); // Prints 1056965

So really the second example is the unexpected one in some ways.

Now the exact value stored in f is 1056964.625, but that's the value you get for every literal from roughly 1056964.5625 to 1056964.6875 - so even the ".6" part isn't always correct. That's why the docs for System.Single state:

By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
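A quick way to see this is that several nearby literals all map to the same stored float. A minimal sketch (the neighbouring literal values here are chosen purely for illustration):

float a = 1056964.57f;
float b = 1056964.63f;
float c = 1056964.68f;

// At this magnitude the spacing between representable floats is 0.125,
// so all three literals round to the same stored value, 1056964.625.
Console.WriteLine(a == b && b == c);   // True
Console.WriteLine((double)b);          // 1056964.625 - widening exposes the exact stored value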

The extra information is still preserved when you convert to double, because double can hold it without "interpreting" it at all - whereas converting it to a decimal form (either for printing or for the decimal type) goes through code which knows it can't "trust" those last two digits.
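To put the two conversion paths side by side, here is a small sketch (the variable names are just for illustration):

float f = 1056964.63f;

decimal viaFloat  = (decimal)f;          // 1056965      - float-to-decimal keeps only 7 significant digits
decimal viaDouble = (decimal)(double)f;  // 1056964.625  - widening to double first keeps the exact stored value
decimal literal   = 1056964.63m;         // 1056964.63   - a decimal literal avoids the float representation entirely

If the value is known at compile time and exactness matters, a decimal literal (or parsing the original string) sidesteps the float representation altogether.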

answered by Jon Skeet