Suppose I write,
int a = 111;
int b = 509;
int c = a * b;
So what is the time complexity of computing 'a * b'? How is the multiplication operation executed?
Compiling this function:
int f(int a, int b) {
return a * b;
}
With gcc -O3 -march=native -m64 -fomit-frame-pointer -S
gives me the following assembly:
f:
movl %ecx, %eax
imull %edx, %eax
ret
The first instruction (movl) copies the first argument into the return register, the second instruction (imull) multiplies it by the second argument, and then the result is returned.
The actual multiplication is done with imull, which, depending on your CPU type, will take a certain number of CPU cycles.
If you look at Agner Fog's instruction timing tables, you can see how long each instruction takes. On most x86 processors it appears to be a small constant; however, the imul instruction on the AMD K8 with a 64-bit argument and result is listed as 4-5 CPU cycles. I don't know whether that is a measurement issue or genuinely variable time.
Also note that there are more factors involved than just the execution time. The integer has to be moved through the processor and into the right place to be multiplied. All of this, together with other factors, contributes to the latency that is also listed in Agner Fog's tables. On top of that there are issues such as caching which make life more difficult; it is not easy to say how fast something will run without actually running it.
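If you want a rough number for your own machine, a small micro-benchmark is the most direct way to get one. The sketch below (plain C, timing a long chain of dependent 64-bit multiplications with clock_gettime) is only an illustration: the iteration count, the multiplier value, and the argc trick used to defeat constant folding are arbitrary choices made for this sketch, not anything dictated by the compiler output above.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Rough estimate of integer multiply latency: time a long chain of
   dependent 64-bit multiplications, so each multiply has to wait for
   the previous result. The iteration count and multiplier are arbitrary;
   OR-ing argc into the multiplier just stops the compiler from
   constant-folding the whole loop away. */
int main(int argc, char **argv)
{
    (void)argv;
    const uint64_t iters = 100000000ULL;          /* 1e8 dependent multiplies */
    uint64_t m = 0x9E3779B97F4A7C15ULL | (uint64_t)argc;
    uint64_t x = 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < iters; i++)
        x *= m;                                   /* dependency chain, no overlap */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Printing x keeps the loop from being removed as dead code. */
    printf("result %llu, about %.2f ns per multiply\n",
           (unsigned long long)x, ns / (double)iters);
    return 0;
}

Compile with something like gcc -O3 mulbench.c -o mulbench and run it; because the multiplications form a dependency chain, the figure you get approximates the multiply latency rather than the throughput, and it will differ from machine to machine for exactly the reasons above.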
x86 isn't the only architecture, and it's not inconceivable that there are CPUs and architectures out there with non-constant-time multiplication. This is especially important for cryptography, where algorithms that use multiplication might be susceptible to timing attacks on those platforms.
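One way to gather evidence on a given platform is to time the same multiplication chain with operands of different magnitudes and compare. The sketch below does that; the multiplier values, the iteration count, and the volatile reads used to stop the compiler from specialising the loop are assumptions made for illustration. On mainstream x86 the two timings should match, while a consistent gap would hint at an early-terminating, operand-dependent multiplier.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Crude check for operand-dependent multiply timing: run the same dependent
   multiplication chain with a small and a large multiplier and compare.
   The volatile reads in main stop the compiler from specialising the loop
   for a known constant (e.g. turning x * 3 into a shift/add sequence). */
static double time_chain(uint64_t multiplier, uint64_t iters)
{
    struct timespec t0, t1;
    uint64_t x = 1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < iters; i++)
        x = x * multiplier + 1;      /* +1 keeps the chain from collapsing to 0 */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile uint64_t sink = x;      /* keep the loop from being optimised away */
    (void)sink;
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    const uint64_t iters = 50000000ULL;
    volatile uint64_t small_m = 3ULL;
    volatile uint64_t large_m = 0xFFFFFFFFFFFFFFC5ULL;

    printf("small multiplier: %.2f ns per multiply\n",
           time_chain(small_m, iters) / (double)iters);
    printf("large multiplier: %.2f ns per multiply\n",
           time_chain(large_m, iters) / (double)iters);
    return 0;
}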