I am used to choosing the smallest data type needed to fully represent my values while preserving semantics. I don't use long when int is guaranteed to suffice. Same for int vs short.
But for real numbers, in C# there is the commonly used double -- and no corresponding single or float. I can still use System.Single, but I wonder why C# didn't bother to make it into a language keyword like they did with double.
In contrast, there are language keywords short, int, long, ushort, uint, and ulong.
So, is this a signal to developers that single-precision is antiquated, deprecated, or should otherwise be avoided in favor of double or decimal?
(Needless to say, single-precision has the downside of less precision. That's a well-known tradeoff for the smaller size, so let's not focus on that.)
Edit: My apologies, I mistakenly thought that float isn't a keyword in C#. But it is, which renders this question moot.
The float alias represents the .NET System.Single data type, so I would say it's safe to use.
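To make that concrete, here is a minimal sketch showing that float and System.Single are the very same type, just as int aliases System.Int32 (the class name and variable names are mine, chosen for illustration):

    using System;

    class FloatAliasDemo
    {
        static void Main()
        {
            // float is the C# keyword alias for System.Single;
            // both names refer to exactly the same type.
            Console.WriteLine(typeof(float) == typeof(Single)); // True

            // The 'f' suffix makes a literal single-precision.
            float f = 1.5f;
            System.Single s = 1.5f;
            Console.WriteLine(f == s); // True

            // Without a suffix, a real literal defaults to double.
            double d = 1.5;
            Console.WriteLine(d.GetType()); // System.Double
        }
    }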