Dividing by 2 vs Multiplying by 0.5

Tags:

c#

Consider the following:

void Foo(int start, int end)
{
    int mid = (start + end) / 2;
}

void Bar(int start, int end)
{
    int mid = (start + end) * 0.5;
}

Why does Foo compile successfully while Bar does not? Dividing by 2 yields an int, while multiplying by 0.5 yields a double that is not implicitly converted to int:

Cannot implicitly convert type 'double' to 'int'. An explicit conversion exists (are you missing a cast?)

What was the C# language designers' reasoning behind this?

asked Apr 06 '14 by Max
1 Answer

The / operator performs integer division when both operands are integers (5 / 3 = 1). To make it perform floating-point division, at least one of the operands must be a floating-point type (float or double). Integer division exists because applications often need the quotient or the remainder of a division (for the remainder you use %), and integer division is also faster than floating-point division, as the sketch below shows.
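
A minimal sketch of the three cases (the variable names are just illustrative):

int q = 5 / 3;       // both operands are int, so integer division: q == 1
int r = 5 % 3;       // remainder of the same division: r == 2
double d = 5 / 3.0;  // one operand is double, so floating-point division: d == 1.666...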

On the other hand, multiplying an integer by a floating-point value always produces a floating-point result. To store that result in an integer variable you have to write the cast yourself, because floating-point values have a different representation in memory and the conversion can lose precision.
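
For instance, a version of Bar that compiles under that rule might look like this (just one possible fix, assuming truncation toward zero is the intent):

void Bar(int start, int end)
{
    int mid = (int)((start + end) * 0.5); // explicit cast converts the double result back to int
}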

It is much the same in almost all programming languages: most of them provide both integer division and floating-point division, often through the same operator, and almost all statically typed languages require an explicit cast from floating-point to integral types.

answered Sep 22 '22 by Mihai Maruseac