Consider the following:
void Foo(int start, int end)
{
    int mid = (start + end) / 2;
}

void Bar(int start, int end)
{
    int mid = (start + end) * 0.5;
}
Why does Foo compile successfully while Bar does not? Dividing by 2 implicitly yields an int, while multiplying by 0.5 gives an un-cast double:

Cannot implicitly convert type 'double' to 'int'. An explicit conversion exists (are you missing a cast?)
What was the C# language designers' reasoning behind this?
The / operator performs integer division when both operands are integers (5 / 3 = 1). To make it do floating-point division, at least one of the operands must be a floating-point type (float or double). This design exists because there are cases where your application needs the quotient or the remainder of an integer division (for the remainder you use %). Integer division is also faster than floating-point division.
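For illustration, here is a minimal sketch (the variable names and values are my own, not from the original post) showing how /, %, and promoting one operand to double behave:

// Integer vs. floating-point division in C#.
using System;

class DivisionDemo
{
    static void Main()
    {
        int a = 5, b = 3;

        int quotient  = a / b;         // integer division: 1
        int remainder = a % b;         // remainder: 2

        double ratio = a / (double)b;  // one operand is double, so the result is 1.666...

        Console.WriteLine($"quotient {quotient}, remainder {remainder}, ratio {ratio}");
    }
}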
On the other hand, multiplying an int by a double always produces a double. To store the result in an integer type you have to perform the cast yourself, because floating-point values have a different representation in memory and the conversion can lose precision.
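As a sketch of one possible fix (assuming the goal is still the midpoint), Bar from the question compiles once the double result is explicitly cast back to int:

void Bar(int start, int end)
{
    // The explicit cast truncates the fractional part toward zero.
    int mid = (int)((start + end) * 0.5);
}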
It is the same in almost all programming languages: nearly all of them have both integer division and floating-point division, most often using the same operator. Likewise, almost all statically typed languages require an explicit cast from floating-point to integral types.