What does the "M" stand for in decimal value assignment?

MSDN says:

"Without the suffix m, the number is treated as a double, thus generating a compiler error."

What does the "M" in:

decimal current = 10.99M; 

stand for?

Is it any different than:

decimal current = (decimal)10.99; 
asked Apr 18 '12 by JCisar


People also ask

What does M mean in decimals?

A real literal suffixed by M or m is of type decimal (money). For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal. This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker's rounding.
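For illustration (this snippet is mine, not from the original page), here is how those literal forms look in code; note that Math.Round on a decimal uses banker's rounding by default in .NET:

using System;

class LiteralDemo
{
    static void Main()
    {
        // All four literals are of type decimal because of the m/M suffix.
        decimal a = 1m;
        decimal b = 1.5m;
        decimal c = 1e10m;
        decimal d = 123.456M;
        Console.WriteLine($"{a} {b} {c} {d}");

        // Banker's rounding: midpoint values round to the nearest even digit.
        Console.WriteLine(Math.Round(2.5m)); // prints 2
        Console.WriteLine(Math.Round(3.5m)); // prints 4
    }
}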

What is M after number C#?

Notice the "f" and "m" after the numbers - it tells the compiler that we are assigning a float and a decimal value. Without it, C# will interpret the numbers as double, which can't be automatically converted to either a float or decimal.

How do you declare decimals?

{ DECIMAL | DEC } [(precision [, scale ])] The precision must be between 1 and 31. The scale must be less than or equal to the precision. If the scale is not specified, the default scale is 0. If the precision is not specified, the default precision is 5.

How do you initialize a decimal value?

To initialize a decimal variable, use the suffix m or M, as in decimal x = 300.5m;. If the suffix m or M is omitted, the literal is treated as a double.
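A minimal sketch of that rule (variable names are mine):

decimal x = 300.5m;      // OK: the literal is a decimal
var y = 300.5m;          // y is inferred as decimal
double z = 300.5;        // no suffix, so the literal is a double
// decimal bad = 300.5;  // compile error: 300.5 is a double here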


1 Answer

The M suffix marks the literal as a decimal in the source code.

To answer the second part of your question, yes they are different.

decimal current = (decimal)10.99; 

is the same as

double tmp = 10.99;
decimal current = (decimal)tmp;

For numbers well within double's precision this round trip should not be a problem, but if you mean decimal you should specify decimal.


Update:

Wow, I was wrong. I went to check the IL to prove my point, and the compiler had optimized the conversion away.


Update 2:

I was right after all! You still need to be careful. Compare the output of these two functions.

using System;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Test1());
        Console.WriteLine(Test2());
        Console.ReadLine();
    }

    static decimal Test1()
    {
        return 10.999999999999999999999M;
    }

    static decimal Test2()
    {
        return (decimal)10.999999999999999999999;
    }
}

The first returns 10.999999999999999999999, but the second returns 11: the literal is first rounded to the nearest double, which is exactly 11, before the cast to decimal.


Just as a side note, double gives you about 15 decimal digits of precision, but decimal gives you 96 bits of precision with a scaling factor from 0 to 28. So you can represent any number in the range ((-2^96 to 2^96) / 10^(0 to 28)).
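A small sketch (mine, not part of the original answer) that makes both limits visible:

using System;

class PrecisionDemo
{
    static void Main()
    {
        // decimal's full 96-bit magnitude at scale 0: 2^96 - 1.
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        // double runs out of digits long before decimal does: this literal
        // is rounded to the nearest double, which happens to be exactly 11.
        Console.WriteLine(10.999999999999999999999);  // 11
        Console.WriteLine(10.999999999999999999999m); // 10.999999999999999999999
    }
}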

answered Oct 24 '22 by Scott Chamberlain