I am working with C# code that assigns decimal values in the following ways. Are these the same values, or is there a difference?
decimal a = 0;
decimal b = 0m;
decimal c = 0.00m;
If a zero leads a number, whether before or after the decimal point, it is not significant; e.g. 0.00849 has 3 significant figures. If a zero trails a non-zero digit but sits to the left of the decimal point, it is also not significant; e.g. 4500 has 2 significant figures.
Decimal numbers are numbers with a decimal point in them. Leading zeros are the zeros in front of a number, and trailing zeros are the zeros after it. You can think of trailing and leading zeros as switching roles on either side of the decimal point.
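C#'s decimal type behaves in line with that intuition: leading zeros carry no information, while trailing zeros after the decimal point are preserved as part of the value's scale. A quick sketch (the expected output is shown in the comments):
// Leading zeros are dropped; trailing zeros after the point are kept in the scale.
Console.WriteLine(000.5m);  // "0.5"
Console.WriteLine(0.500m);  // "0.500" - the scale of 3 is preserved
Console.WriteLine(4500m);   // "4500"  - trailing zeros before the point are just part of the value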
Just adding a tiny bit more practical information to the other good answers.
Decimals have many internal representations of 0; however, they will all compare equal to zero.
From the Decimal Struct documentation:
The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
The scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeros might be revealed by the ToString method if an appropriate format string is applied.
Example of changing the scaling factor
// Shows whether d equals 0, its ToString form, and the four ints returned by
// decimal.GetBits (flags, hi, mid, lo) in hex.
string GetBits(decimal d)
{
    var bits = decimal.GetBits(d);
    return $"{d == 0} {d,31} {bits[3],10:X8}{bits[2],10:X8}{bits[1],10:X8}{bits[0],10:X8}";
}
Console.WriteLine(GetBits(0));
Console.WriteLine(GetBits(0.0m));
Console.WriteLine(GetBits(0.000m));
// Manually set the Scaling Factor and Sign
Console.WriteLine(GetBits(new decimal(0,0,0,true,10)));
Output
Equals 0                    ToString      Other        Hi       Mid        Lo
-----------------------------------------------------------------------------
True                               0   00000000  00000000  00000000  00000000
True                             0.0   00010000  00000000  00000000  00000000
True                           0.000   00030000  00000000  00000000  00000000
True                    0.0000000000   800A0000  00000000  00000000  00000000
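For reference, the flags word shown in the Other column packs the scale into bits 16-23 and the sign into bit 31. A small sketch that pulls those out (DecodeFlags is just an illustrative helper, not part of the framework):
// Decode the flags element (bits[3]) returned by decimal.GetBits:
// bits 16-23 hold the scale (0-28), bit 31 holds the sign.
static (int Scale, bool IsNegative) DecodeFlags(decimal d)
{
    int flags = decimal.GetBits(d)[3];
    return ((flags >> 16) & 0xFF, flags < 0);
}

Console.WriteLine(DecodeFlags(0.000m));                          // (3, False)
Console.WriteLine(DecodeFlags(new decimal(0, 0, 0, true, 10)));  // (10, True)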
There is indeed a difference, but it is one that likely won't be a problem if you know about it.
Firstly, as pointed out by Ben Cottrell in his answer, all of those values will test equal. In fact decimal a = 0; will implicitly convert the 0 to 0m, which makes it identical to b. Both a and b will test as equal to 0.00m or any other variation with a different number of decimal places.
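A quick check, just to make the equality point concrete:
decimal a = 0;      // the int literal 0 is implicitly converted to decimal
decimal b = 0m;
decimal c = 0.00m;

Console.WriteLine(a == b);                              // True
Console.WriteLine(b == c);                              // True
Console.WriteLine(a.Equals(c));                         // True
Console.WriteLine(a.GetHashCode() == c.GetHashCode());  // True - hash codes ignore the scale too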
Where the difference comes in is when you're looking at the internals. This is only really relevant when you're serializing the decimals as byte arrays or using the array returned by decimal.GetBits(). In that case 0M is 16 zero bytes, while 0.00M has a scale of 2, so one of the bytes in its binary representation (byte 14) is non-zero.
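For example, flattening the four ints from decimal.GetBits() into a 16-byte array (one common way to serialize a decimal; ToBytes below is just an illustrative helper, and the byte positions assume a little-endian machine with System.Linq in scope) shows byte 14 holding the scale:
// Lay out the four ints (lo, mid, hi, flags) as 16 bytes; byte 14 holds the scale.
static byte[] ToBytes(decimal d) =>
    decimal.GetBits(d).SelectMany(i => BitConverter.GetBytes(i)).ToArray();

Console.WriteLine(ToBytes(0M)[14]);     // 0 - every byte of 0M is zero
Console.WriteLine(ToBytes(0.00M)[14]);  // 2 - the scale of 0.00M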
Also the string form of the two will differ, so comparing strings will fail:
decimal a = 0M;
decimal b = 0.00M;
if (a.ToString() != b.ToString())
    Console.WriteLine($"'{a}' != '{b}'");
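If you do need the strings to match, format both values explicitly or compare the values themselves instead of their string forms:
Console.WriteLine(a == b);                                    // True
Console.WriteLine(a.ToString("0.00") == b.ToString("0.00"));  // True - both produce "0.00"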
So while they are equal, they are still different.