Is multiplication of two numbers a constant time algorithm?

Tags: c++, operators

Suppose I write,

int a = 111;
int b = 509;
int c = a * b;

So what is the time complexity of computing 'a * b'? How is the multiplication operation executed?

asked Jul 28 '13 by user2560730

1 Answer

Compiling this function:

int f(int a, int b) {
    return a * b;
}

With gcc -O3 -march=native -m64 -fomit-frame-pointer -S gives me the following assembly:

f:
    movl    %ecx, %eax
    imull   %edx, %eax
    ret

The first instruction (movl) copies the first argument into the return register; the second (imull) multiplies it by the second argument, and the result is returned in %eax.

The actual multiplication is done by imull, which, depending on your CPU model, takes a certain number of CPU cycles.

If you look at Agner Fog's instruction timing tables, you can see how long each instruction takes. On most x86 processors it appears to be a small constant; however, imul on the AMD K8 with a 64-bit argument and result is listed as 4-5 CPU cycles. I don't know whether that reflects a measurement issue or genuinely variable timing.

Also note that there are more factors involved than just the execution time. The operands have to be moved through the processor into the right place before they can be multiplied. This and other factors contribute to latency, which is also noted in Agner Fog's tables. On top of that, there are issues such as cache effects that make life more difficult; it is not easy to say how fast something will run without actually running it.
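
To make that last point concrete, here is a rough benchmarking sketch (my illustration, not part of the original answer). The time per multiply it reports folds in memory accesses, additions and loop overhead, which is exactly why the instruction tables alone don't tell the whole story:

#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Rough throughput measurement for 32-bit integer multiplication.
// The volatile sink keeps the compiler from optimizing the loop away.
int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(1, 1 << 20);

    std::vector<int> xs(1 << 20), ys(1 << 20);
    for (std::size_t i = 0; i < xs.size(); ++i) {
        xs[i] = dist(rng);
        ys[i] = dist(rng);
    }

    std::int64_t acc = 0;
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < xs.size(); ++i)
        acc += static_cast<std::int64_t>(xs[i]) * ys[i];
    volatile std::int64_t sink = acc;  // prevent dead-code elimination
    auto stop = std::chrono::steady_clock::now();
    (void)sink;

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    std::cout << "per multiply: " << static_cast<double>(ns) / xs.size()
              << " ns (approximate)\n";
}

Whatever number this prints on your machine reflects the whole pipeline (loads, adds, loop control), not imull in isolation.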


x86 isn't the only architecture, and it is not inconceivable that there are CPUs and architectures out there with non-constant-time multiplication. This is especially important for cryptography, where algorithms that use multiplication might be susceptible to timing attacks on those platforms.
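
As a software-level contrast (again my illustration, not part of the original answer): once the numbers no longer fit in a single machine word, multiplication is no longer one instruction and its cost grows with the length of the operands. A minimal schoolbook sketch over little-endian 32-bit limbs:

#include <cstdint>
#include <vector>

// Schoolbook multiplication of arbitrary-length numbers stored as
// little-endian vectors of 32-bit limbs. It performs O(n * m) limb
// multiplications, so the running time grows with operand size,
// unlike a single fixed-width imull.
std::vector<std::uint32_t> multiply(const std::vector<std::uint32_t>& a,
                                    const std::vector<std::uint32_t>& b) {
    std::vector<std::uint32_t> result(a.size() + b.size(), 0);
    for (std::size_t i = 0; i < a.size(); ++i) {
        std::uint64_t carry = 0;
        for (std::size_t j = 0; j < b.size(); ++j) {
            std::uint64_t cur = static_cast<std::uint64_t>(a[i]) * b[j]
                              + result[i + j] + carry;
            result[i + j] = static_cast<std::uint32_t>(cur);
            carry = cur >> 32;
        }
        result[i + b.size()] = static_cast<std::uint32_t>(carry);
    }
    return result;
}

For a plain int, as in the question, none of this applies: the hardware multiplies the whole word in one instruction, as shown above.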

answered Sep 22 '22 by orlp