MSDN says:
"Without the suffix m, the number is treated as a double, thus generating a compiler error."
What does the "M" in:
decimal current = 10.99M;
stand for?
Is it any different from:
decimal current = (decimal)10.99
A real literal suffixed by M or m is of type decimal (money). For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal. This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker's rounding.
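For illustration, here is a minimal sketch of those spec examples (the variable names are mine):

    // Each literal below is of type decimal because of the m/M suffix.
    decimal a = 1m;
    decimal b = 1.5m;
    decimal c = 1e10m;     // exponent notation is allowed: 10000000000
    decimal d = 123.456M;  // upper- and lower-case suffixes are equivalent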
Notice the "f" and "m" after the numbers - they tell the compiler that we are assigning a float and a decimal value. Without a suffix, C# interprets the literal as a double, which can't be implicitly converted to either a float or a decimal.
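A short sketch of what happens with and without the suffix (CS0664 is the error the C# compiler reports for this case):

    float f = 4.5f;           // OK: float literal
    decimal m = 10.99m;       // OK: decimal literal
    // float bad1 = 4.5;      // error CS0664: literal of type double cannot be
    // decimal bad2 = 10.99;  //   implicitly converted to 'float' / 'decimal'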
In SQL, the DECIMAL type is declared as { DECIMAL | DEC } [(precision [, scale])]. The precision must be between 1 and 31, and the scale must be less than or equal to the precision. If the scale is not specified, the default scale is 0; if the precision is not specified, the default precision is 5.
To initialize a decimal variable, use the suffix m or M, e.g. decimal x = 300.5m;. If the m or M suffix is not used, the literal is treated as a double.
M makes the number a decimal representation in code.
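As a quick sketch (the variable name is mine), the suffix alone is enough for the compiler to infer decimal:

    var price = 10.99m;                  // inferred as decimal
    Console.WriteLine(price.GetType());  // System.Decimal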
To answer the second part of your question, yes they are different.
decimal current = (decimal)10.99
is the same as
double tmp = 10.99;
decimal current = (decimal)tmp;
For values with few enough significant digits to survive the round-trip through double, this is not a problem, but if you mean decimal you should specify decimal.
Update:
Wow, I was wrong. I went to check the IL to prove my point, and the compiler optimized the conversion away.
Update 2:
I was right after all! You still need to be careful. Compare the output of these two functions.
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Test1());
        Console.WriteLine(Test2());
        Console.ReadLine();
    }

    static decimal Test1()
    {
        return 10.999999999999999999999M;
    }

    static decimal Test2()
    {
        return (decimal)10.999999999999999999999;
    }
}
The first returns 10.999999999999999999999,
but the second returns 11.
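The reason is that the unsuffixed literal is rounded to the nearest double before the cast ever runs; a sketch (the intermediate variable is mine):

    // 10.999999999999999999999 has more digits than a double can hold,
    // so it is rounded to the nearest representable double: exactly 11.0.
    double d = 10.999999999999999999999;
    Console.WriteLine((decimal)d);  // 11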
Just as a side note, double gives you about 15 decimal digits of precision, but decimal gives you 96 bits of precision with a scaling factor from 0 to 28. So you can represent any number in the range (-2^96 to 2^96) / 10^(0 to 28).
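A small sketch of why that base-10 representation matters (nothing beyond the core language is assumed):

    double d = 0.1 + 0.2;
    decimal m = 0.1m + 0.2m;
    Console.WriteLine(d == 0.3);   // False: 0.1 and 0.2 have no exact binary form
    Console.WriteLine(m == 0.3m);  // True: decimal stores base-10 fractions exactly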