
Why can't C# decimals be initialized without the M suffix?

public class MyClass
{
    public const Decimal CONSTANT = 0.50; // ERROR CS0664
}

produces this error:

error CS0664: Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type

as documented. But this works:

public class MyClass
{
    public const Decimal CONSTANT = 50; // OK
}

And I wonder why they forbid the first one. It seems weird to me.

asked Aug 04 '11 by onof




1 Answer

The type of a literal without the m suffix is double - it's as simple as that. You can't initialize a float that way either:

float x = 10.0; // Fail 
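Adding the f suffix fixes that example, just as M does for decimal:

float x = 10.0f; // OK: the literal itself is now float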

The type of the literal should be made clear from the literal itself, and the variable it's assigned to must be assignable from the literal's type. Your second example works because there's an implicit conversion from int (the type of the literal) to decimal. There's no implicit conversion from double to decimal, as it can lose information.
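To make the contrast concrete, here's a minimal sketch (the class and member names are illustrative) of which initializations compile:

public class ConversionDemo
{
    // int -> decimal: implicit and lossless, so this compiles
    public const decimal FromInt = 50;

    // the M suffix makes the literal decimal to begin with
    public const decimal FromDecimalLiteral = 0.50M;

    // double -> decimal exists only as an explicit conversion,
    // since it can lose information
    public static readonly decimal FromDouble = (decimal)0.50;
}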

Personally I'd have preferred it if there'd been no default or if the default had been decimal, but that's a different matter...

answered Sep 30 '22 by Jon Skeet