To work with the decimal data type, I have to initialize variables like this:
decimal aValue = 50.0M;
What does the M part stand for?
The '%m' conversion is a GNU C Library extension; it prints strerror(errno) and consumes no argument. So printf("%m\n", d); is equivalent to printf("%s\n", strerror(errno), d); which (the extra argument d being ignored) is in turn equivalent to printf("%s\n", strerror(errno));
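A minimal sketch of %m in action, assuming glibc (the conversion is a GNU extension, so this is not portable C; the file path is illustrative):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (fopen("/nonexistent/file", "r") == NULL) {
        /* %m consumes no argument; glibc expands it to strerror(errno) */
        printf("fopen failed: %m\n");
        /* portable equivalent */
        printf("fopen failed: %s\n", strerror(errno));
    }
    return 0;
}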
It could be a mistyping of the != operator, meaning "not equal to":

if (a != b) { /* a is not equal to b */ }

Or it could be a mistyping of a == !b, meaning "a is equal to not b", which would most commonly be used with booleans.
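A small C sketch of both readings; note that ==! is not a single operator, so the lexer reads it as == followed by !:

#include <stdbool.h>
#include <stdio.h>

int main(void) {
    bool a = true, b = false;

    if (a ==! b)   /* parsed as a == (!b): "a equals not-b" */
        printf("a == !b is true\n");

    if (a != b)    /* the not-equal operator, which =! is often a typo for */
        printf("a != b is true\n");

    return 0;
}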
%d is used to print a decimal (integer) number, while %c is used to print a character. If you try to print a character with the %d format, the computer will print the ASCII code of that character.
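For example, this standard C snippet prints the same char twice, once as a character and once as its ASCII code:

#include <stdio.h>

int main(void) {
    char c = 'A';
    printf("%c\n", c); /* prints: A */
    printf("%d\n", c); /* prints: 65, the ASCII code of 'A' */
    return 0;
}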
It means it's a decimal literal, as others have said. However, the origins are probably not those suggested in other answers. From the C# Annotated Standard (the ECMA version, not the MS version):
The decimal suffix is M/m since D/d was already taken by double. Although it has been suggested that M stands for money, Peter Golde recalls that M was chosen simply as the next best letter in decimal.

A similar annotation mentions that early versions of C# included "Y" and "S" suffixes for byte and short literals respectively. They were dropped on the grounds of not being useful very often.
From the C# specification:
var f = 0f;   // float
var d = 0d;   // double
var m = 0m;   // decimal (money)
var u = 0u;   // unsigned int
var l = 0l;   // long
var ul = 0ul; // unsigned long
Note that you can use an uppercase or lowercase notation.
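To connect this back to the original question: without the M suffix, 50.0 is a double literal, and C# will not implicitly convert a double to decimal. A minimal C# sketch (a console program; the variable names are illustrative):

using System;

class Program
{
    static void Main()
    {
        decimal aValue = 50.0M;   // M marks a decimal literal
        double dValue = 50.0D;    // D (or no suffix) gives a double
        // decimal broken = 50.0; // compile error CS0664: a double literal
        //                        // cannot be implicitly converted to decimal

        Console.WriteLine(aValue.GetType()); // System.Decimal
        Console.WriteLine(dValue.GetType()); // System.Double
    }
}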