I just ran into a situation in Objective-C where:
NSLog(@"%i", (int) (0.2 * 10)); // prints 2
NSLog(@"%i", (int) ((1.2 - 1) * 10)); // prints 1
So I wonder: if the value is a float or double and we want an integer, should we never just use (int) to do the cast, but always use (int) round(someValue) instead? Or, to flip the question around: when should we use a plain (int)? And even in those situations, wouldn't (int) round(someValue) also do the job, so that we should almost always use (int) round(someValue)?
The issue here is not converting floating-point values to integers but the rounding errors that occur with floating-point arithmetic. When you use floating point, you should understand it well.
The common implementations of float and double use IEEE 754 binary floating-point values. These values are represented as a significand multiplied by a power of two, multiplied by a sign (+1 or -1). For floats, the significand is a 24-bit binary numeral, with one bit before the “decimal point” (or “binary point” if you prefer). E.g., the number 1.25 has a significand of 1.01000000000000000000000. For doubles, the significand is a 53-bit binary numeral, with one bit before the point.
Because of this, the values of the decimal numerals .2 and 1.2 cannot be exactly represented. They must be approximated. When you write “.2” or “1.2” in source code, the compiler converts it to a double that is very near the actual value. Sometimes that result is slightly greater than the mathematical value; sometimes it is slightly less.
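To see the approximation directly, print the stored values with extra digits. A minimal C sketch (the commented output is what IEEE 754 doubles actually store, shown to 17 decimal places):

#include <stdio.h>

int main(void)
{
    printf("%.17f\n", 0.2);     // 0.20000000000000001 (slightly above .2)
    printf("%.17f\n", 1.2);     // 1.19999999999999996 (slightly below 1.2)
    printf("%.17f\n", 1.2 - 1); // 0.19999999999999996 (the error surfaces)
    return 0;
}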
When you convert a float or double to int, the result is (by definition) only the integer portion of the value; the value is truncated toward zero. So, if your value is slightly greater than a positive integer, you get that integer. If your value is slightly less than a positive integer, you get the next lower integer.
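That truncation rule explains both results from the question. A minimal C sketch:

#include <stdio.h>

int main(void)
{
    double a = 0.2 * 10;        // the product rounds to exactly 2.0 here
    double b = (1.2 - 1) * 10;  // ~1.9999999999999996, just below 2
    printf("%d\n", (int) a);    // 2
    printf("%d\n", (int) b);    // 1: the fractional part is simply dropped
    printf("%d\n", (int) -1.9); // -1: truncation is toward zero, not downward
    return 0;
}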
If you expect that exact mathematics would give you an integer result, and the floating-point operations you are performing are so few and so simple that the errors have not accumulated too much, then you can round the floating-point value to an integer using the round function. In simple situations, you can also round by adding .5 before truncation, as by writing “(int) (f + .5)” (a trick that is only suitable for non-negative values).
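Applied to the example from the question, a minimal sketch (round is declared in math.h; the same arithmetic applies in Objective-C):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double f = (1.2 - 1) * 10;      // ~1.9999999999999996
    printf("%d\n", (int) f);        // 1: truncation alone loses the result
    printf("%d\n", (int) round(f)); // 2: round to nearest first
    printf("%d\n", (int) (f + .5)); // 2: the add-.5 trick, positive values only
    return 0;
}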
It depends on what you want. Obviously, a straight cast to int will be faster than a call to round, whereas round will give more accurate values. Unless you are writing code that relies on speed to be effective (in which case floating-point values might not be the best choice anyway), I would say it's worth calling round. Even if it only changes something you display on-screen by one pixel, when dealing with certain things (angle measures, colors, etc.), the more accuracy you can have, the better.
EDIT: A simple test to back up my claim that casting is faster than rounding, run on a MacBook Pro:
Code:
#include <stdio.h>
#include <time.h>
#include <math.h>

int value; // global so the compiler cannot optimize the loops away

void test_cast()
{
    clock_t start = clock();
    value = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        value += (int) (((i / 1000.0) - 1.0) * 10.0);
    }
    printf("test_cast: %lu\n", (unsigned long) (clock() - start));
}

void test_round()
{
    clock_t start = clock();
    value = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        value += round(((i / 1000.0) - 1.0) * 10.0);
    }
    printf("test_round: %lu\n", (unsigned long) (clock() - start));
}

int main()
{
    test_cast();
    test_round();
    return 0;
}
Results:
test_cast: 11895
test_round: 14353
Note: I know that clock() isn't the best profiling function, but it does show that round() at least uses more CPU cycles.