
Strange compiler behavior with float literals vs float variables

I have noticed an interesting behavior with float rounding / truncation in the C# compiler. Namely, when a float literal has more significant digits than a float is guaranteed to represent (7 decimal digits), then a) explicitly casting a float result to float (a semantically unnecessary operation) and b) storing the intermediate result in a local variable both change the output. An example:

using System;

class Program
{
    static void Main()
    {
        float f = 2.0499999f;
        var a = f * 100f;
        var b = (int) (f * 100f);
        var c = (int) (float) (f * 100f);
        var d = (int) a;
        var e = (int) (float) a;
        Console.WriteLine(a);
        Console.WriteLine(b);
        Console.WriteLine(c);
        Console.WriteLine(d);
        Console.WriteLine(e);
    }
}

The output is:

205
204
205
205
205

In the JITted debug build on my computer, b is calculated as follows:

          var b = (int) (f * 100f);
0000005a  fld         dword ptr [ebp-3Ch] 
0000005d  fmul        dword ptr ds:[035E1648h] 
00000063  fstp        qword ptr [ebp-5Ch] 
00000066  movsd       xmm0,mmword ptr [ebp-5Ch] 
0000006b  cvttsd2si   eax,xmm0 
0000006f  mov         dword ptr [ebp-44h],eax 

whereas d is calculated as

          var d = (int) a;
00000096  fld         dword ptr [ebp-40h] 
00000099  fstp        qword ptr [ebp-5Ch] 
0000009c  movsd       xmm0,mmword ptr [ebp-5Ch] 
000000a1  cvttsd2si   eax,xmm0 
000000a5  mov         dword ptr [ebp-4Ch],eax 

Finally, my question: why is the second line of the output different from the fourth? Does that extra fmul make such a difference? Also note that if the last (already unrepresentable) digit of the float f is removed or even reduced, everything "falls into place".
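
One way to see what that widened intermediate holds is to redo the multiplication in double precision; a minimal sketch (variable names are just for illustration, and the exact printed digits may vary, but the value sits just below 205):

float f = 2.0499999f;                   // the stored float is ~2.0499999523
double wide = (double) f * 100.0;       // widened intermediate, just below 205

Console.WriteLine(wide.ToString("R"));  // prints roughly 204.99999523162842
Console.WriteLine((int) wide);          // truncation -> 204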

Asked Jun 21 '10 by Alan

2 Answers

Your question can be simplified to asking why these two results are different:

float f = 2.0499999f;
var a = f * 100f;
var b = (int)(f * 100f);
var d = (int)a;
Console.WriteLine(b);
Console.WriteLine(d);

If you look at the compiled code in .NET Reflector, you can see that it is actually compiled as if it were the following:

float f = 2.05f;
float a = f * 100f;
int b = (int) (f * 100f);
int d = (int) a;
Console.WriteLine(b);
Console.WriteLine(d);

Floating point calculations cannot always be made exactly. The result of 2.05f * 100f is not exactly equal to 205, but just a little less, due to rounding errors. When this intermediate result is converted to an integer, it is truncated; when it is stored as a float, it is rounded to the nearest representable value. These two methods of rounding give different results.
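
A rough sketch of the two paths, assuming a double-width intermediate as in the disassembly above (variable names are illustrative):

double wide = (double) 2.0499999f * 100.0;  // just below 205

int truncated = (int) wide;       // conversion to int truncates -> 204
float rounded = (float) wide;     // conversion to float rounds to nearest -> 205f
int viaFloat = (int) rounded;     // -> 205

Console.WriteLine(truncated);     // 204
Console.WriteLine(viaFloat);      // 205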


Regarding your comment on my answer: when you write this:

Console.WriteLine((int) (2.0499999f * 100f));
Console.WriteLine((int)(float)(2.0499999f * 100f));

the calculations are done entirely by the compiler at compile time. The above code is equivalent to this:

Console.WriteLine(204);
Console.WriteLine(205);

Answered Nov 10 '22 by Mark Byers

In a comment you asked

Are these rules different?

Yes. Or, rather, the rules allow for different behaviour.

And if yes, am I supposed to know this, either from the C# language reference doc or MSDN, or is this just an occasional discrepancy between the compiler and the runtime?

It's implied by the specification. Floating point operations have a certain minimum level of precision that must be met, but the compiler or runtime is permitted to use more precision if it sees fit. That can cause large, observable changes when you do operations that magnify small changes. Rounding, for example, can turn an extremely small change into an extremely large one.
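
A minimal sketch of this effect (results are deliberately not guaranteed; that is the point):

float f = 2.0499999f;
float stored = f * 100f;        // rounded to single precision when stored

// The right-hand side below may be evaluated at higher precision by the
// JIT, so this comparison is not guaranteed to print True on every
// platform or build configuration.
Console.WriteLine(stored == f * 100f);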

This fact leads to fairly frequently asked questions here. For some background on this situation and other situations that can produce similar discrepancies, see the following:

Why does this floating-point calculation give different results on different machines?

C# XNA Visual Studio: Difference between "release" and "debug" modes?

CLR JIT optimizations violates causality?

https://stackoverflow.com/questions/2494724

Answered Nov 10 '22 by Eric Lippert