If one has to calculate a fraction of a given int value, say:
int j = 78;
int i = 5 * j / 4;
Is this faster than doing:
int i = 1.25*j; // ?
If it is, is there a conversion factor one could use to decide which to use, as in how many int divisions can be done in the same time as one float multiplication?
Edit: I think the comments make it clear that the floating-point math will be slower, but the question is, by how much? If I need to replace each float multiplication by N int divisions, for what N will this not be worth it anymore?
You've said all the values are dynamic, which makes a difference. For the specific values 5 * j / 4, the integer operations are going to be blindingly fast, because pretty much the worst case is that the compiler optimises them to two shifts and one addition, plus some messing around to cope with the possibility that j is negative. If the CPU can do better (single-cycle integer multiplication or whatever) then the compiler typically knows about it. The limits of compilers' abilities to optimize this kind of thing basically come when you're compiling for a wide family of CPUs (generating lowest-common-denominator ARM code, for example), where the compiler doesn't really know much about the hardware and therefore can't always make good choices.
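To make the "two shifts and one addition" remark concrete, here is a sketch of the kind of strength reduction an optimiser typically performs for 5 * j / 4. It assumes a 32-bit int, two's complement and an arithmetic right shift; the shifts on negative values mirror what the generated machine code does rather than strictly portable C++, and the function name is just illustrative.

// Roughly what an optimiser turns 5 * j / 4 into.
int five_quarters(int j)
{
    int t    = (j << 2) + j;   // 5 * j as a shift plus an addition
    int bias = (t >> 31) & 3;  // 3 if t is negative, 0 otherwise
    return (t + bias) >> 2;    // divide by 4, rounding toward zero
}

The bias term is the "messing around" for negative values: adding 3 before the final shift makes the result round toward zero, matching what / does on a negative operand in C and C++.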
I suppose that if a and b are fixed for a while (but not known at compile time), then it's possible that computing k = double(a) / b once, and then int(k * x) for many different values of x, might be faster than computing a * x / b each time. I wouldn't count on it.
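A minimal sketch of that idea, with illustrative function names (note that, as discussed further down, the two versions do not always produce the same results):

#include <cstddef>

// Hoist the division out: one double divide up front, then a multiply
// and a conversion per element.
void scale_precomputed(const int* x, int* out, std::size_t n, int a, int b)
{
    const double k = double(a) / b;
    for (std::size_t i = 0; i < n; ++i)
        out[i] = int(k * x[i]);
}

// The straightforward version: an integer multiply and divide per element.
void scale_integer(const int* x, int* out, std::size_t n, int a, int b)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a * x[i] / b;
}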
If all the values vary each time, then it seems unlikely that the floating-point division to compute the 1.25, followed by floating-point multiplication, is going to be any faster than the integer multiplication followed by integer division. But you never know, test it.
It's not really possible to give simple relative timings for this on modern processors; it really depends a lot on the surrounding code. The main costs in your code often aren't the "actual" ops: it's "invisible" stuff like instruction pipelines stalling on dependencies, or spilling registers to stack, or function call overhead. Whether or not the function that does this work can be inlined might easily make more difference than how the function actually does it. As far as definitive statements of performance are concerned, you can basically test real code or shut up. But the chances are that if your values start as integers, doing integer ops on them is going to be faster than converting to double and doing a similar number of double ops.
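If you do want a number for your own hardware, a throwaway microbenchmark along these lines is the quickest way to get one. The sizes and names are illustrative, and the results will vary with compiler flags, inlining and the surrounding code, which is exactly the point above.

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 1 << 24;
    std::vector<int> v(n);
    for (std::size_t i = 0; i < n; ++i)
        v[i] = int(i) - (1 << 23);          // mix of negative and positive values

    long long s1 = 0, s2 = 0;               // accumulate so the loops aren't optimised away

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        s1 += 5 * v[i] / 4;                 // integer multiply + divide
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        s2 += int(1.25 * v[i]);             // convert, float multiply, convert back
    auto t2 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    std::printf("int: %lld us (sum %lld)\nfloat: %lld us (sum %lld)\n",
                (long long)std::chrono::duration_cast<us>(t1 - t0).count(), s1,
                (long long)std::chrono::duration_cast<us>(t2 - t1).count(), s2);
}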
It is impossible to answer this question out of context. Additionally, 5*j/4 does not generally produce the same result as (int) (1.25*j), due to properties of integer and floating-point arithmetic, including rounding and overflow.
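Two small examples of that divergence, assuming a 32-bit int and IEEE-754 double (the values are chosen purely for illustration):

#include <climits>
#include <cstdio>

int main()
{
    // Rounding: 0.29 is not exactly representable as a double, so the
    // float path lands just below 29 and truncates to 28.
    int x = 100;
    std::printf("%d vs %d\n", x * 29 / 100, int(0.29 * x));   // 29 vs 28

    // Overflow: once 5 * j exceeds INT_MAX the integer intermediate
    // overflows (undefined behaviour), while 1.25 * j is still exact
    // in double and truncates to the mathematically correct 536870912.
    int j = INT_MAX / 5 + 1;
    std::printf("%d\n", int(1.25 * j));
}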
If your program is doing mostly integer operations, then the conversion of j to floating point, multiplication by 1.25, and conversion back to integer might be free, because it uses floating-point units that are not otherwise engaged.
Alternatively, on some processors, the operating system might mark the floating-point state as invalid, so that the first time a process uses it, there is an exception: the operating system saves the floating-point registers (which contain values from another process), restores or initializes the registers for your process, and returns from the exception. This would take a great deal of time, relative to normal instruction execution.
The answer also depends on characteristics of the specific processor model the program is executing on, as well as the operating system, how the compiler translates the source into assembly, and possibly even what other processes on the system are doing.
Also, the performance difference between 5*j/4 and (int) (1.25*j) is most often too small to be noticeable in a program unless it, or operations like it, are repeated a great many times. (And, if they are, there may be huge benefits to vectorizing the code, that is, using the Single Instruction Multiple Data [SIMD] features of many modern processors to perform several operations at once.)
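As a sketch of the kind of code that benefits (the function name is illustrative): applied across a whole array, a modern compiler at -O2/-O3 can often auto-vectorize a loop like this so that several elements are processed per instruction.

#include <cstddef>

void five_quarters_all(int* v, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        v[i] = 5 * v[i] / 4;   // same scalar operation, repeated across the array
}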