
Rounding float to nearest integer in C

Tags:

c

I might have a noob question here, but searching the site hasn't yielded anything. I'm learning to program in C and I'm trying to build a function from scratch that rounds floats to the nearest integer, without using math.h. Here's my code:

#include <stdio.h>

void main()
{
    float b;
    for(b = 0; b <= 2; b = b + 0.1)
    {
        printf("%f    ", b);
        printf("%i    ", (int)b);
        printf("%f    ", b - (int)b);
        printf("Nearest: ");
        if((b - (int)b)<0.5)
            printf("%i    ", (int)b);
        else
            printf("%i    ", (int)b + 1);
        printf("Function: %i    ", round_near(b));
        printf("\n");
    }
    getchar();
}

int round_near(float b)
{
    if((b - (int)b)<0.5)
        return(int)b;
    else
        return (int)b + 1;
}

My results look like this:

[screenshot of the program's output: the inline rounding columns look correct, but the Function column from round_near shows garbage values]

Some of the code is superfluous; it's just there so I can see the individual steps of my function. What gives? Are there some shenanigans with float type variables I'm not aware of?

asked Jan 09 '23 by Vlad Danila

1 Answer

You don't have a prototype for int round_near(float b), so you're relying on an implicit declaration.

Try adding this to your code.

int round_near (float b); // Prototype

int main(void) // Nitpick: main returns an int!

Because round_near(b) is called through an implicit declaration, the argument b undergoes the default argument promotions and is passed as a double. But the definition expects a float, which has a different binary layout, so the function reads garbage and you get crazy random results.
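
Here's a minimal sketch of the whole program with the fix applied (your rounding logic is kept as-is):

#include <stdio.h>

int round_near(float b); // Prototype: now every call site knows the argument is a float

int main(void)
{
    float b;
    for (b = 0; b <= 2; b = b + 0.1)
        printf("%f    Function: %i\n", b, round_near(b));
    return 0;
}

int round_near(float b)
{
    if ((b - (int)b) < 0.5)
        return (int)b;
    else
        return (int)b + 1;
}

With the prototype in place, b is passed as a genuine float and the Function column matches the manual calculation.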

You should make sure your code compiles without any warnings to avoid this sort of thing. Implicit function declarations were actually removed from the language in C99 and only survive as a compatibility extension, and every compiler from the last decade or two will warn you about them at compile time.
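
For example, with GCC or Clang (the file name here is just a placeholder):

cc -std=c11 -Wall -Wextra -Werror round_near.c -o round_near

-Werror turns the implicit-declaration warning into a hard error, so this kind of bug can't slip through silently.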

answered Jan 18 '23 by QuestionC