Why doesn't dividing two integers give a float? [duplicate]

Tags: c

Can anyone explain why b gets rounded off here when I divide by an integer, even though b is a float?

#include <stdio.h>

int main(void) {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350;
    c = 750;
    d = c / 350;
    printf("%.2f %.2f\n", b, d);
    // output: 2.00 2.14
    return 0;
}

http://codepad.org/j1pckw0y

asked Apr 25 '13 by mushroom



4 Answers

This is because of implicit conversion. The variables b, c, and d are of type float, but the / operator sees two integer operands, so it performs integer division and produces an integer result; only when that result is assigned to b is it converted to a float. If you want floating-point division, make at least one of the operands of / a float, like this:

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350.0f;
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
    return 0;
}
answered Oct 24 '22 by Sukrit Kalra

Use a type cast:

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / (float)350;
    c = 750;
    d = c / (float)350;
    printf("%.2f %.2f\n", b, d);
    // output: 2.14 2.14
    return 0;
}

Here is another way to solve it:

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350.0; // if you use 'a / 350' here,
                   // it is a division of two integers,
                   // so the result will be an integer
    c = 750;
    d = c / 350;
    printf("%.2f %.2f\n", b, d);
    // output: 2.14 2.14
    return 0;
}

In both cases you are telling the compiler that 350 is a float, not an integer; consequently, the result of the division is a float, not an integer.
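You can equally cast the int variable instead of the literal; it is enough for either operand to be a float. A minimal sketch of that variant (not from the original answer, just for illustration):

#include <stdio.h>

int main() {
    int a = 750;
    // Casting the variable (rather than the literal) also forces a
    // floating-point division: the int literal 350 is promoted to float.
    float b = (float)a / 350;
    printf("%.2f\n", b); // prints 2.14
    return 0;
}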

answered Oct 23 '22 by Cacho Santa

"a" is an integer, when divided with integer it gives you an integer. Then it is assigned to "b" as an integer and becomes a float.

You should do it like this:

b = a / 350.0;
answered Oct 23 '22 by Goran Belfinger


Specifically, this is not rounding your result; it is truncating toward zero. So if you divide -3 by 2, you get -1, not -2. Welcome to integral math! Back before CPUs could do floating-point operations, and before math co-processors arrived, we did everything with integer math. Even though there were libraries for floating-point math, they were too expensive (in CPU instructions) for general use, so we used a 16-bit value for the whole part of a number and another 16-bit value for the fraction.
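A rough sketch of both points; the 16.16 fixed-point split below is just one common layout, not necessarily the one any particular library used:

#include <stdio.h>

int main() {
    // Integer division truncates toward zero; it does not round.
    printf("%d\n", -3 / 2);  // prints -1, not -2
    printf("%d\n",  3 / 2);  // prints 1

    // Toy 16.16 fixed-point: upper 16 bits hold the whole part,
    // lower 16 bits hold the fraction.
    long fixed = (750L << 16) / 350;    // fixed-point divide: 750 / 350
    printf("%.2f\n", fixed / 65536.0);  // prints 2.14
    return 0;
}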

EDIT: my answer makes me think of the classic old man saying "when I was your age..."

answered Oct 23 '22 by Daniel Santos