As regards best practices, is there a meaningful difference between using:
Double d;
and
double d;
I know best practices are fraught with contradictions, so I know the answers may vary here. I just want to know the pragmatic difference between the two.
Use double for non-integer math where the most precise answer isn't necessary. Use decimal for non-integer math where precision is needed (e.g. money and currency). Use int by default for any integer-based operations that can use that type, as it will be more performant than short or long.
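A minimal sketch of those guidelines (the variable names and values are made up for illustration, and it assumes a modern C# project with top-level statements):

using System;

double distanceKm = 12.7;        // approximate, binary floating-point math
decimal invoiceTotal = 19.99m;   // exact decimal arithmetic for money
int itemCount = 3;               // default integral type for counts

decimal orderTotal = invoiceTotal * itemCount;  // int converts implicitly to decimal
Console.WriteLine(orderTotal);                  // prints 59.97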
A common rule of thumb is that decimal is for exact values and double is for approximate values, but strictly speaking decimal is not exact either: according to the documentation it provides 28-29 significant decimal digits of precision.
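A short illustration of that difference (top-level statements assumed; the literals are arbitrary):

using System;

Console.WriteLine(0.1 + 0.2 == 0.3);     // False: double stores binary approximations
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True: decimal represents these values exactly

// decimal still has finite precision (roughly 28-29 significant digits):
Console.WriteLine(1m / 3m);              // 0.3333333333333333333333333333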
double is a fundamental data type built into the language and used to define numeric variables that hold numbers with decimal points. C, C++, C# and many other programming languages recognize double as a type. A double can represent fractional as well as whole values.
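For instance (illustrative values only):

using System;

double whole = 42;         // whole value; the int literal converts implicitly to double
double fractional = 3.25;  // fractional value
Console.WriteLine(whole + fractional);  // prints 45.25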
There is no difference. double is just an alias for System.Double in C#.
Note that VB.NET doesn't have the same aliasing (int for System.Int32, double for System.Double, etc.), so the aliasing applies only to C#, not to .NET as a whole.
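To see the aliasing in practice, a quick sketch (assuming using System; is in scope so the Double spelling resolves):

using System;

double a = 1.5;   // the C# keyword
Double b = 1.5;   // the same type, spelled via its BCL name

Console.WriteLine(typeof(double) == typeof(Double));   // True
Console.WriteLine(a.GetType());                        // System.Double
Console.WriteLine(b.GetType());                        // System.Double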
No, there's no difference: double is a C# keyword that's an alias for the System.Double type.