 

Any reason to prefer single precision to double precision data type?

Tags: c#, .net

I am used to choosing the smallest data type needed to fully represent my values while preserving semantics. I don't use long when int is guaranteed to suffice. Same for int vs short.

But for real numbers, in C# there is the commonly used double -- and no corresponding single or float. I can still use System.Single, but I wonder why C# didn't bother to make it into a language keyword like they did with double.

In contrast, there are language keywords short, int, long, ushort, uint, and ulong.

So, is this a signal to developers that single precision is antiquated or deprecated, or that it should otherwise be avoided in favor of double or decimal?

(Needless to say, single-precision has the downside of less precision. That's a well-known tradeoff for smaller size, so let's not focus on that.)

Edit: My apologies, I mistakenly thought that float wasn't a keyword in C#. It is, which renders this question moot.

Philip asked Nov 29 '22


1 Answer

The float alias represents the .NET System.Single data type, so I would say it's safe to use.
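
A minimal sketch illustrating the point (the class and variable names are just for demonstration): float is the C# keyword alias for System.Single, in the same way int aliases System.Int32, so the two are interchangeable.

    using System;

    class FloatDemo
    {
        static void Main()
        {
            // float is the C# keyword alias for System.Single,
            // just as int is the alias for System.Int32.
            Console.WriteLine(typeof(float) == typeof(Single)); // True

            float f = 1.0f;   // 32-bit, roughly 7 significant decimal digits
            double d = 1.0;   // 64-bit, roughly 15-16 significant decimal digits

            Console.WriteLine(sizeof(float));  // 4
            Console.WriteLine(sizeof(double)); // 8
        }
    }

In practice, float is fully supported and tends to be chosen when size matters (large arrays, graphics, interop), while double is the usual default for general-purpose math.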

Mike Perrenoud answered Dec 14 '22