 

Why are two decimals with the same value formatted differently?

I have two little pieces of code. In my opinion, they should result in the same string, but they don't:

(1.23M * 100M).ToString()

results in:

123,00

and

(123M).ToString()   

results in:

123

My very simple question is: can somebody explain to me why this (strange?) behavior occurs?

asked Mar 24 '23 by Martin Mulder

2 Answers

The decimal type is represented as an integer scaled by a power of ten. From the documentation for decimal:

The scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeros might be revealed by the ToString method if an appropriate format string is applied.

Using GetBits you can see that 123.00M is represented as 12300 / 10² while 123M is 123 / 10⁰.
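
For illustration, here is a small sketch (the class and method names are mine, not from the original) that uses decimal.GetBits to expose the scale; the scale is stored in bits 16-23 of the fourth element of the returned array:

using System;

class GetBitsDemo
{
    static void Main()
    {
        // decimal.GetBits returns four ints: a 96-bit integer (lo, mid, hi)
        // plus a flags word whose bits 16-23 hold the power-of-ten scale.
        PrintBits(1.23M * 100M); // integer 12300, scale 2 -> prints as "123.00"
        PrintBits(123M);         // integer 123,   scale 0 -> prints as "123"
    }

    static void PrintBits(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine(d + " -> lo=" + bits[0] + ", scale=" + scale);
    }
}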

Edit

I wrote a simple program that demonstrates the issue:

using System;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine((1.23M * 100M).ToString());
        Console.WriteLine((123M).ToString());
    }
}

I looked at the generated IL:

.method private hidebysig static void  Main(string[] args) cil managed
{
  .entrypoint
  // Code size       51 (0x33)
  .maxstack  6
  .locals init ([0] valuetype [mscorlib]System.Decimal CS$0$0000)
  IL_0000:  nop
  IL_0001:  ldc.i4     0x300c
  IL_0006:  ldc.i4.0
  IL_0007:  ldc.i4.0
  IL_0008:  ldc.i4.0
  IL_0009:  ldc.i4.2
  IL_000a:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32,
                                                                     int32,
                                                                     int32,
                                                                     bool,
                                                                     uint8)
  IL_000f:  stloc.0
  IL_0010:  ldloca.s   CS$0$0000
  IL_0012:  call       instance string [mscorlib]System.Decimal::ToString()
  IL_0017:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_001c:  nop
  IL_001d:  ldc.i4.s   123
  IL_001f:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32)
  IL_0024:  stloc.0
  IL_0025:  ldloca.s   CS$0$0000
  IL_0027:  call       instance string [mscorlib]System.Decimal::ToString()
  IL_002c:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_0031:  nop
  IL_0032:  ret
} // end of method Program::Main

We can see that the compiler actually optimized away the multiplication and inserted a call to construct a single decimal instance for the first case. The two instances use different representations: exactly the ones I described above.
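
As a hedged sketch, the two IL construction paths can be reproduced directly in C# with the decimal constructors (0x300C is 12300 in hex; the class name is hypothetical):

using System;

class IlDemo
{
    static void Main()
    {
        // Mirrors the first IL sequence: lo=0x300C (12300), mid=0, hi=0,
        // positive sign, scale=2 -> the value the multiplication produced.
        decimal fromMultiplication = new decimal(0x300C, 0, 0, false, 2);

        // Mirrors ldc.i4.s 123 + Decimal::.ctor(int32): 123 with scale 0.
        decimal fromLiteral = new decimal(123);

        Console.WriteLine(fromMultiplication);                // "123.00" (or "123,00" depending on culture)
        Console.WriteLine(fromLiteral);                       // "123"
        Console.WriteLine(fromMultiplication == fromLiteral); // True: same value, different bits
    }
}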

answered Apr 06 '23 by Mike Zboray

They're two different values, bitwise. Unlike double, decimal does not automatically normalize - it looks like it's retained the information that at one point you had two decimal places. You can see the exact same difference without the multiplication:

Console.WriteLine(123m);
Console.WriteLine(123.00m);

The documentation is somewhat unclear (from what I can see) about exactly how operations on decimal values determine the scale of the result, i.e. how many decimal places are retained. (I wouldn't be surprised to learn that it's standardised somewhere...)
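
From experimenting (my observation, not something the documentation spells out), addition seems to keep the larger of the two operands' scales, while multiplication seems to add them, which would explain why 1.23M (scale 2) times 100M (scale 0) ends up with scale 2:

using System;

class ScaleDemo
{
    static void Main()
    {
        // Addition appears to keep the larger operand scale.
        Console.WriteLine(1.0M + 1.00M); // "2.00" (scale 2)

        // Multiplication appears to sum the operand scales.
        Console.WriteLine(1.2M * 2.00M); // "2.400" (scale 1 + 2 = 3)
    }
}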

answered Apr 06 '23 by Jon Skeet