
Differing casting behavior [duplicate]

This is really weird. I was tracking down this error:

Negating the minimum value of a twos complement number is invalid.

...and it turned out that it was due to code like this:

var valueFromUser = "470259123000000";
var doubleValue = Convert.ToDouble(valueFromUser, CultureInfo.InvariantCulture);
Math.Abs((int)doubleValue);
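(For context on the exception itself: once the cast has produced Int32.MinValue, its absolute value does not fit in a signed 32-bit integer, because two's complement has one more negative value than positive, so Math.Abs throws. The arithmetic can be checked in Python as a language-neutral sketch:)

```python
INT32_MIN = -2**31      # -2147483648
INT32_MAX = 2**31 - 1   #  2147483647

# |INT32_MIN| exceeds INT32_MAX, so negating the minimum value has no
# representable 32-bit result; .NET's Math.Abs throws in exactly this case.
print(-INT32_MIN > INT32_MAX)  # True
```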

Indeed, when I run this in LINQPad:

(int)Convert.ToDouble("470259123000000", CultureInfo.InvariantCulture)

...it gives me:

-2147483648

However, another developer here says he gets something completely different (not in LINQPad):

-1141206336

When I try to evaluate just the cast by itself on a constant:

(int)470259123000000.0

...I get a compile error because of needing unchecked. And this:

unchecked((int)470259123000000.0)

...evaluates to -1141206336, like the other developer got. So I thought maybe Convert.ToDouble created a subtly different value from the constant. Nope, this evaluates to True:

Convert.ToDouble("470259123000000", CultureInfo.InvariantCulture) == 470259123000000.0

What the heck is going on here? Why does evaluating these seemingly identical expressions yield such different results?

Update:

Found a hint. The raw bit patterns of the double 4.70259123E14 and of the int -1141206336, in hex, are:

0x42FABB2BBFA92C00
       0xBBFA92C0

So I guess one of the casts is shoving those low bits directly into the int, which makes -2147483648 the larger mystery.
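(The matching digits can also be reached from the integer value itself: the double converts exactly to 470259123000000, and keeping only the low 32 bits of that integer gives the same pattern. A quick check of the arithmetic, in Python just for illustration:)

```python
value = int(470259123000000.0)   # the double represents this integer exactly
print(hex(value))                # 0x1abb2bbfa92c0

low32 = value & 0xFFFFFFFF       # keep only the low 32 bits
print(hex(low32))                # 0xbbfa92c0

# reinterpret those bits as a signed 32-bit (two's complement) integer
signed = low32 - 2**32 if low32 >= 2**31 else low32
print(signed)                    # -1141206336
```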

Jacob asked Mar 25 '15 02:03

1 Answer

I'm not exactly sure of the underlying reason, but it looks like a compiler bug, because a program compiled with Roslyn gives the same value (-2147483648) for both expressions.

The compiler is allowed to evaluate constant expressions at compile time. The conversion inside the unchecked expression is done entirely by the compiler, while in the other case it is done at run time by the CLR, so there is always the possibility that the two use slightly different rules for out-of-range values. As you have observed, the compiler appears to truncate differently than the runtime when fitting the value into a 32-bit integer. You can see in the underlying IL that the program simply loads the precomputed constant (0xbbfa92c0) in place of the unchecked expression.

using System;
using System.Globalization;

public class Program
{
    public static void Main(string[] args)
    {
        var n = unchecked((int)470259123000000.0);
        Console.WriteLine(n);

        n = (int)Convert.ToDouble("470259123000000", CultureInfo.InvariantCulture);
        Console.WriteLine(n);
    }
}

Decompiled IL from the .NET 4.5 compiler:

  .method public hidebysig static void  Main(string[] args) cil managed
  {
    // 
    .maxstack  2
    .locals init (int32 V_0)
    IL_0000:  nop
    IL_0001:  ldc.i4     0xbbfa92c0
    IL_0006:  stloc.0
    IL_0007:  ldloc.0
    IL_0008:  call       void [mscorlib]System.Console::WriteLine(int32)
    IL_000d:  nop
    IL_000e:  ldstr      "470259123000000"
    IL_0013:  call       class [mscorlib]System.Globalization.CultureInfo [mscorlib]System.Globalization.CultureInfo::get_InvariantCulture()
    IL_0018:  call       float64 [mscorlib]System.Convert::ToDouble(string,
                                                                    class [mscorlib]System.IFormatProvider)
    IL_001d:  conv.i4
    IL_001e:  stloc.0
    IL_001f:  ldloc.0
    IL_0020:  call       void [mscorlib]System.Console::WriteLine(int32)
    IL_0025:  nop
    IL_0026:  ret
  } // end of method Program::Main
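(The two results can be read as two different strategies for an out-of-range conversion, which the C# specification leaves unspecified in an unchecked context: the old compiler's constant folding appears to keep the low 32 bits of the integer value, while a conversion done on x86 hardware via the cvttsd2si instruction returns the "integer indefinite" value 0x80000000, i.e. Int32.MinValue, for any source that does not fit. A Python sketch of both behaviors; the function names are illustrative, not from the source:)

```python
def to_int32_truncate(d: float) -> int:
    """Keep the low 32 bits of the exact integer value (two's complement).
    Models what the pre-Roslyn compiler's constant folding produced."""
    low32 = int(d) & 0xFFFFFFFF
    return low32 - 2**32 if low32 >= 2**31 else low32

def to_int32_saturate(d: float) -> int:
    """Return Int32.MinValue (0x80000000) for any out-of-range source.
    Models x86's cvttsd2si result, matching the run-time conversion here."""
    if -2**31 <= d <= 2**31 - 1:
        return int(d)
    return -2**31

print(to_int32_truncate(470259123000000.0))  # -1141206336
print(to_int32_saturate(470259123000000.0))  # -2147483648
```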
Mike Zboray answered Oct 18 '22 02:10