 

Optimizing x64 assembler MUL loop

I'm writing math code which needs to multiply large numbers fast. It breaks down to multiplying an array of integers by a single integer. In C++ this looks like the following (operating on unsigneds):

void muladd(unsigned* r, const unsigned* a, unsigned len, unsigned b) {
   unsigned __int64 of = 0;  // overflow
   unsigned i = 0;  // loop variable
   while (i < len) {
      of += (unsigned __int64)a[i] * b + r[i];
      r[i] = (unsigned)of;
      of >>= 32;
      ++i;
   }
   r[i] = (unsigned)of;  // save overflow (r must hold len + 1 elements)
}

I unrolled this loop manually, converted it to 64 bit and worked on the .asm compiler output to optimize it further. The main .asm loop now looks like this:

mov   rax, rdi                             ; rdi = b
mul   QWORD PTR [rbx+r10*8-64]             ; rdx:rax = a[i] * b; r10 = i
mov   rsi, QWORD PTR [r14+r10*8-64]        ; r14 = r; rsi = r[i]
add   rax, rsi
adc   rdx, 0
add   rax, r11                             ; r11 = of (low part)
adc   rdx, 0
mov   QWORD PTR [r14+r10*8-64], rax        ; save result
mov   r11, rdx

; this repeats itself 8 times with different offsets

When I benchmark this, I find that it takes about 6.3 cycles on average per multiplication on my Core2 Quad.

My question is: can I speed this up somehow? Unfortunately, I see no way to avoid one of the additions, and the multiplication always needs RDX:RAX, so I need to move the data around and cannot really "multiply in parallel".

Any ideas anyone?

Update: After some more testing, I've managed to bring the speed up to about 5.4 cycles per 64-bit MUL (that includes all add, move and loop overhead). I guess this is about the best you can get on a Core2, since the Core2 does not have a very fast MUL instruction: it has a throughput of 3 and a latency of 6 (resp. 7) cycles. Sandy Bridge will be much better, with a throughput of 1 and a latency of 3 (resp. 4) cycles.

Regarding the much lower number for GMP: I took that from their source code, and it seems to me that it is a theoretical number. What is certain is that it was calculated for an AMD K9 CPU, and from what I have read, AMD chips have a faster MUL unit than the (older) Intel chips.

asked Nov 14 '11 by cxxl

1 Answer

I once wrote a loop rather like this, doing a minimal amount of processing on a lot of data, with the result that the loop was limited by memory speed.

I'd try prefetching a[i] and r[i].

If you're using gcc, use the __builtin_prefetch() built-in; in assembler, use the PREFETCHT0 instruction.

http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/Other-Builtins.html

When this works, the results can be dramatic. So long as the loop is a thousand or so iterations long, I'd prefetch a[i+64] and r[i+64] as a starting point and see how much difference it makes on your CPU. You may need to try larger prefetch distances.
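A sketch of what this could look like in the original C++ version (my adaptation: it uses gcc's `unsigned __int128` extension for the 64-bit-limb carry, and the distance of 64 limbs is just the suggested starting point, not a tuned value):

```cpp
#include <cstdint>
#include <cstddef>

// muladd with software prefetch hints: r[0..len] += a[0..len-1] * b
// (gcc/clang only; __builtin_prefetch is a hint and never faults,
// so prefetching past the end of the arrays is harmless)
void muladd_prefetch(uint64_t* r, const uint64_t* a, size_t len, uint64_t b) {
    unsigned __int128 of = 0;               // carry/overflow accumulator
    for (size_t i = 0; i < len; ++i) {
        __builtin_prefetch(&a[i + 64]);     // start fetching a[i+64] for reading
        __builtin_prefetch(&r[i + 64], 1);  // rw = 1: r[i+64] will be written
        of += (unsigned __int128)a[i] * b + r[i];
        r[i] = (uint64_t)of;
        of >>= 64;
    }
    r[len] = (uint64_t)of;                  // save overflow; r needs len + 1 slots
}
```

Whether the hints help depends on whether the loop is actually memory-bound on your machine; on data that already fits in L1 they only cost a couple of extra uops per iteration.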

answered Sep 18 '22 by camelccc