
C# floating point literals: why doesn't the compiler default to the left-hand side variable type?

Tags:

c#

To declare floating-point literals we need to append the suffix 'f' for floats and 'd' for doubles.

Example:

float num1 = 1.23f;   // float stored as float

float num2 = 1.23;    // double literal assigned to a float - this line won't compile in C#

It is said that C# defaults to double if the suffix on a floating-point literal is omitted.

My question is: what prevents a modern language like C# from defaulting to the left-hand side variable type? After all, the compiler can see the entire line of code.

Is this for historic reasons in compiler design, or is there something I'm not getting?
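For reference, this is roughly how the suffixes and the usual workarounds look in practice (a small sketch; the variable names are just illustrative):

float f1 = 1.23f;        // 'f' suffix: the literal is a float
double d1 = 1.23;        // no suffix: the literal is a double
double d2 = 1.23d;       // 'd' suffix: explicitly a double
decimal m1 = 1.23m;      // 'm' suffix: the literal is a decimal
float f2 = (float)1.23;  // or cast the double literal down explicitly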

asked Nov 02 '14 by 911

2 Answers

I would say the main reason is that of predictability and consistency.

Let's imagine you are the compiler and you have to determine the type of the literal 3.14 in the following statements:


float p = 3.14;

The first one is easy. Obviously, it should be a float.


What about this one?

var x = 3.14 * 10.0f;

The user explicitly added an f after 10.0, so that operand should be a float. But what about 3.14? Should x be a double or a float?
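For comparison, under the actual C# rules (as I read the numeric promotion rules) there is no guessing: the double literal wins, the float operand is promoted, and x is inferred as double. A quick sketch:

var x = 3.14 * 10.0f;            // 10.0f is implicitly converted to double
Console.WriteLine(x.GetType());  // System.Double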


And this one:

bool b = 3.14 / 3.14f == 1;

Is the first 3.14 a double or a float? The value of b depends on it!
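Under the current rules there is no ambiguity here either: the float operand is widened to double, the two representations of 3.14 differ slightly, and b comes out false. If the compiler silently inferred float for the first literal instead, the division would be exactly 1 and b would be true. A quick sketch of that contrast (my illustration, not part of the original answer):

bool b1 = 3.14 / 3.14f == 1;    // double / float: the float widens to double, the quotient is not exactly 1 -> false
bool b2 = 3.14f / 3.14f == 1;   // float / float: exactly 1 -> true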


As a user, I'd rather explicitly add an f and know exactly what I'm getting.

answered Sep 21 '22 by Rotem

Someone else here can probably explain why the .NET designers decided against using the declared type to determine how a numeric literal should be interpreted. In the absence of any stated philosophy, the short answer is simple: they didn't want to do that, so they didn't. I'm not going to tell you that you shouldn't care about this, because it's not unreasonable to want the compiler to understand exactly what you mean. It'd at least be nice, right?

(If I had to guess, though, I'd say it has a lot to do with keeping implicit typing consistent: should float f = 1.0 give you a float while var d = 1.0 gives you a double? That gets even less logical when you consider that f.Equals(d) would be false, even though f == d is true.)
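To make that parenthetical concrete, here is a small sketch of the comparison (assuming the usual overload resolution: float.Equals(object) receives a boxed double, while == widens the float to double):

float f = 1.0f;
double d = 1.0;
Console.WriteLine(f == d);      // True  - f is implicitly converted to double
Console.WriteLine(f.Equals(d)); // False - d is boxed as a double, which is not a float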

But, assuming they had good reasons not to introduce any prediction based on the declared type (and some perfectly good reasons can be found in Rotem's post), the best reason I can imagine for deciding that double num1 = 1.23 is acceptable while float num1 = 1.23 is not is that double is frankly more useful in most cases. The CPU and I/O penalties of using a 64-bit floating point value instead of a 32-bit one are negligible in most use cases, but the usefulness of not having to worry about exceeding the bounds of that data type is significant.
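As a rough illustration of that range argument (values from the framework's own constants):

Console.WriteLine(float.MaxValue);   // ~3.4028235E+38, about 7 significant decimal digits
Console.WriteLine(double.MaxValue);  // ~1.7976931348623157E+308, about 15-16 significant digits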

Note that you could make the exact same argument about why 1.23 can't be assigned to a decimal without a suffix. The people who designed C# decided it was easier to make assumptions about the type of a number literal of any given format, and double is a totally reasonable assumption given that the majority of people who write 1.23 want a double, or at least should be using one.
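The decimal case behaves the same way; a sketch of the analogous error and its fix:

// decimal bad = 1.23;   // won't compile: no implicit conversion from double to decimal
decimal price = 1.23m;   // the 'm' suffix makes the literal a decimal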


answered Sep 20 '22 by furkle