Multiplication and division can be achieved using bit operators, for example
i*2 = i << 1;  i*3 = (i << 1) + i;  i*10 = (i << 3) + (i << 1);
and so on.
Is it actually faster to use, say, (i << 3) + (i << 1) to multiply by 10 than to use i*10 directly? And is there any sort of input that can't be multiplied or divided in this way?
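For concreteness, here is a minimal C sketch (the loop bound and the choice of unsigned are mine, not from the question) that checks those decompositions over a range of values:

    #include <assert.h>
    #include <stdio.h>

    int main(void) {
        /* Check the shift-and-add decompositions from the question over a
           range of sample values.  Unsigned is used deliberately: left-shifting
           a negative signed value is undefined behaviour in C, and a right
           shift does not match signed division for negative odd operands. */
        for (unsigned i = 0; i <= 100000; i++) {
            assert(i * 2u  == i << 1);
            assert(i * 3u  == (i << 1) + i);
            assert(i * 10u == (i << 3) + (i << 1));
        }
        puts("decompositions agree on the sampled range");
        return 0;
    }

The comment also hints at one answer to the second question: shifts stand in for signed multiplication and division only with care, because the overflow and rounding rules differ for negative operands.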
On most processors, multiplication is faster than division.
Shifting bits left and right is apparently faster than multiplication and division on most, maybe even all, CPUs when you are multiplying or dividing by a power of 2. However, it can reduce the clarity of the code for some readers and some algorithms.
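As an illustration of the power-of-two case, here is a small sketch (the function names are mine); with optimization enabled, mainstream compilers typically emit the same machine code for each pair:

    #include <stdio.h>

    /* Multiply by 8 written as arithmetic... */
    unsigned times8_mul(unsigned x)   { return x * 8; }
    /* ...and written as a shift; both describe the same value. */
    unsigned times8_shift(unsigned x) { return x << 3; }

    /* Division by a power of two, likewise (unsigned, so no rounding issues). */
    unsigned div8_div(unsigned x)     { return x / 8; }
    unsigned div8_shift(unsigned x)   { return x >> 3; }

    int main(void) {
        unsigned x = 100;
        printf("%u %u %u %u\n",
               times8_mul(x), times8_shift(x), div8_div(x), div8_shift(x));
        return 0;
    }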
Yes, indeed, there is a difference. A loop with a million multiplies by 0.5 took 0.11 seconds, and a loop with a million divides by 2 took 1.6 seconds. So it is true for RPG (and probably for the IBM i in general) that multiplying is quicker than dividing.
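Those timings come from RPG on IBM i. A rough C analogue, offered only as a sketch (the iteration count, seed value, and use of argc to defeat constant folding are my own choices; absolute results will vary by CPU and compiler, and an optimizer may fold an exact division by 2.0 into a multiply anyway), would look like this:

    #include <stdio.h>
    #include <time.h>

    int main(int argc, char **argv) {
        (void)argv;
        const long N = 100000000L;            /* arbitrary iteration count */
        double divisor = (double)argc + 1.0;  /* >= 2.0, unknown at compile time */
        double factor  = 1.0 / divisor;

        double x = 12345.678, y = 12345.678;
        clock_t t0 = clock();
        for (long i = 0; i < N; i++) x = x * factor + 1.0;   /* multiply loop */
        clock_t t1 = clock();
        for (long i = 0; i < N; i++) y = y / divisor + 1.0;  /* divide loop */
        clock_t t2 = clock();

        printf("multiply: %.3fs   divide: %.3fs   (%g %g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, x, y);
        return 0;
    }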
Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands. On simple low-cost processors, bitwise operations are typically substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition.
Short answer: Not likely.
Long answer: Your compiler has an optimizer in it that knows how to multiply as quickly as your target processor architecture is capable of. Your best bet is to tell the compiler your intent clearly (i.e., i * 2 rather than i << 1) and let it decide what the fastest assembly/machine-code sequence is. It's even possible that the processor itself has implemented the multiply instruction as a sequence of shifts and adds in microcode.
Bottom line: don't spend a lot of time worrying about this. If you mean to shift, shift. If you mean to multiply, multiply. Do what is semantically clearest; your coworkers will thank you later. Or, more likely, curse you later if you do otherwise.
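One easy way to see this in practice is to write the multiplication both ways and look at what the compiler emits, e.g. with gcc -O2 -S (the file name below is only illustrative):

    /* times10.c -- compile with "gcc -O2 -S times10.c" and inspect the
       generated assembly.  On x86-64, an optimizing compiler typically turns
       a constant multiply by 10 into a short lea/add or shift sequence on
       its own, and often produces equivalent code for both functions. */
    int times10(int i)        { return i * 10; }
    int times10_shifts(int i) { return (i << 3) + (i << 1); }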