What intrinsics would I use to vectorize the following (if it's even possible to vectorize) on x86_64?
double myNum = 0;
for (int i = 0; i < n; i++) {
    myNum += a[b[i]] * c[i];  // b[i] = int, a[b[i]] = double, c[i] = double
}
Intel processors can issue two floating-point operations per cycle but only one load, so memory accesses are the tightest constraint. With that in mind, I aimed first to use packed loads to reduce the number of load instructions, and used packed arithmetic just because it was convenient. I've since realized that saturating memory bandwidth may be the biggest issue, and all the messing around with SSE instructions might have been premature optimization if the point was to make the code go fast rather than to learn to vectorize.
The fewest possible loads, with no assumption on the indices in b, requires unrolling the loop four times. One 128-bit load gets four indices from b, two 128-bit loads each get a pair of adjacent doubles from c, and gathering a takes four independent 64-bit loads. That's seven loads, so a floor of 7 cycles per four iterations for serial code (enough to saturate my memory bandwidth if access to a doesn't cache nicely). I've left out some annoying things like handling a number of iterations which is not a multiple of 4.
entry: ; (rdi,rsi,rdx,rcx) are (n,a,b,c)
    xorpd   xmm0, xmm0          ; running sums = {0.0, 0.0}
    xor     r8, r8              ; i = 0
loop:
    movdqa  xmm1, [rdx+4*r8]    ; four 32-bit indices b[i..i+3]
    movapd  xmm2, [rcx+8*r8]    ; c[i], c[i+1]
    movapd  xmm3, [rcx+8*r8+16] ; c[i+2], c[i+3]
    movd    r9d, xmm1           ; r9 = b[i] (zero-extended)
    movq    r10, xmm1           ; r10 = b[i+1]:b[i] packed
    movsd   xmm4, [rsi+8*r9]    ; low half = a[b[i]]
    shr     r10, 32             ; r10 = b[i+1]
    movhpd  xmm4, [rsi+8*r10]   ; high half = a[b[i+1]]
    punpckhqdq xmm1, xmm1       ; bring b[i+2], b[i+3] down to the low half
    movd    r9d, xmm1           ; r9 = b[i+2]
    movq    r10, xmm1           ; r10 = b[i+3]:b[i+2] packed
    movsd   xmm5, [rsi+8*r9]    ; low half = a[b[i+2]]
    shr     r10, 32             ; r10 = b[i+3]
    movhpd  xmm5, [rsi+8*r10]   ; high half = a[b[i+3]]
    add     r8, 4
    cmp     r8, rdi
    mulpd   xmm2, xmm4          ; {a[b[i]]*c[i], a[b[i+1]]*c[i+1]}
    mulpd   xmm3, xmm5          ; {a[b[i+2]]*c[i+2], a[b[i+3]]*c[i+3]}
    addpd   xmm0, xmm2
    addpd   xmm0, xmm3
    jl      loop
Getting the indices out is the most complicated part. movdqa loads 128 bits of integer data from a 16-byte-aligned address (Nehalem has latency penalties for mixing the "integer" and "float" SSE instructions). punpckhqdq copies the high 64 bits to the low 64 bits, but in integer mode, unlike the more simply named movhlps. The 32-bit shifts are done in the general-purpose registers. movhpd loads one double into the upper half of an xmm register without disturbing the lower half; this is how the elements of a are loaded directly into packed registers.
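For reference, here is roughly the same 4x-unrolled gather expressed with SSE2 intrinsics instead of assembly. This is a minimal sketch under the same assumptions as above (n a multiple of 4, b and c 16-byte aligned, indices in range); the function name is mine:

#include <emmintrin.h>

double dot_gather4(int n, const double *a, const int *b, const double *c)
{
    __m128d acc0 = _mm_setzero_pd();
    __m128d acc1 = _mm_setzero_pd();
    for (int i = 0; i < n; i += 4) {
        /* one 128-bit load gets four indices from b */
        __m128i idx = _mm_load_si128((const __m128i *)(b + i));
        int i0 = _mm_cvtsi128_si32(idx);
        int i1 = _mm_cvtsi128_si32(_mm_srli_si128(idx, 4));
        int i2 = _mm_cvtsi128_si32(_mm_srli_si128(idx, 8));
        int i3 = _mm_cvtsi128_si32(_mm_srli_si128(idx, 12));
        /* gather a with four independent 64-bit loads (movsd/movhpd) */
        __m128d a01 = _mm_loadh_pd(_mm_load_sd(a + i0), a + i1);
        __m128d a23 = _mm_loadh_pd(_mm_load_sd(a + i2), a + i3);
        /* two 128-bit loads each get a pair of adjacent doubles from c */
        acc0 = _mm_add_pd(acc0, _mm_mul_pd(a01, _mm_load_pd(c + i)));
        acc1 = _mm_add_pd(acc1, _mm_mul_pd(a23, _mm_load_pd(c + i + 2)));
    }
    /* horizontal sum of the two lanes */
    __m128d acc = _mm_add_pd(acc0, acc1);
    acc = _mm_add_sd(acc, _mm_unpackhi_pd(acc, acc));
    return _mm_cvtsd_f64(acc);
}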
This code is distinctly faster than the code above, which is in turn faster than the simple code, on every access pattern except the simple case b[i] = i, where the naive loop is actually fastest. I also tried a few things like a function around SUM(A(B(:)) * C(:)) in Fortran, which ended up basically equivalent to the simple loop.
I tested on a Q6600 (65 nm Core 2 at 2.4 GHz) with 4GB of DDR2-667 memory, in 4 modules. Testing memory bandwidth gives about 5333 MB/s, so it seems like I'm only seeing a single channel. I'm compiling with Debian's gcc 4.3.2-1.1, -O3 -ffast-math -msse2 -ftree-vectorize -std=gnu99.
For testing I'm letting n be one million, initializing the arrays so a[b[i]] and c[i] both equal 1.0/(i+1), with a few different patterns of indices. One allocates a with a million elements and sets b to a random permutation, another allocates a with 10M elements and uses every 10th, and the last allocates a with 10M elements and sets up b[i+1] by adding a random number from 1 to 9 to b[i]. I'm timing how long one call takes with gettimeofday, clearing the caches by calling clflush over the arrays, and measuring 1000 trials of each function. I plotted smoothed runtime distributions using some code from the guts of criterion (in particular, the kernel density estimator in the statistics package).
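In case it's useful, here is roughly what one timed trial looks like. This is a sketch with a hypothetical kernel name (dot stands in for whichever function is under test), assuming 64-byte cache lines:

#include <emmintrin.h>   /* _mm_clflush */
#include <stddef.h>
#include <sys/time.h>

double dot(int n, const double *a, const int *b, const double *c);

/* Push an array out of the caches before a trial (assumes 64-byte lines). */
static void flush_range(const void *p, size_t bytes)
{
    const char *q = (const char *)p;
    for (size_t off = 0; off < bytes; off += 64)
        _mm_clflush(q + off);
}

/* Time one call with gettimeofday; returns seconds. */
static double time_one_trial(int n, const double *a, size_t a_len,
                             const int *b, const double *c)
{
    flush_range(a, a_len * sizeof(*a));
    flush_range(b, (size_t)n * sizeof(*b));
    flush_range(c, (size_t)n * sizeof(*c));

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    volatile double result = dot(n, a, b, c);  /* keep the call live */
    gettimeofday(&t1, NULL);
    (void)result;
    return (double)(t1.tv_sec - t0.tv_sec) + 1e-6 * (t1.tv_usec - t0.tv_usec);
}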
Now, for the actually important note about bandwidth: 5333 MB/s at a 2.4 GHz clock is just over two bytes per cycle. My data is long enough that nothing should be cacheable, and dividing the (16 + 2*16 + 4*64) bytes loaded per unrolled iteration (if every access to a misses a full cache line) by the runtime of my loop gives almost exactly the ~5333 MB/s bandwidth my system has. It should be pretty easy to saturate that bandwidth without SSE. Even assuming a were completely cached, just reading b and c for one iteration moves 12 bytes of data, and with three loads per iteration at one load per cycle, the naive loop can start a new iteration every third cycle with pipelining.
Assuming anything less than complete caching of a makes arithmetic and instruction count even less of a bottleneck. I wouldn't be surprised if most of the speedup in my code comes from issuing fewer loads to b and c, leaving more capacity free to track and speculate past cache misses on a.
Wider hardware might make more difference. A Nehalem system running three channels of DDR3-1333 would need to move 3*10667 MB/s / 2.66 GHz, about 12 bytes per cycle, to saturate memory bandwidth. That would be impossible for a single thread if a fits in cache, but at 64 bytes per line, cache misses on the gathered vector add up quickly: just one of the four a loads in my loop missing in cache brings the average required bandwidth up to 16 bytes/cycle (112 bytes per unrolled iteration over 7 cycles).
Here's my go at it, fully optimized and tested:
#include <emmintrin.h>

__m128d sum = _mm_setzero_pd();
for (int i = 0; i + 1 < n; i += 2) {        // stop before the odd tail
    sum = _mm_add_pd(sum, _mm_mul_pd(
        _mm_loadu_pd(c + i),                // c[i], c[i+1] (unaligned load)
        _mm_setr_pd(a[b[i]], a[b[i+1]])     // gather the two a elements
    ));
}
if (n & 1) {                                // handle a leftover element
    sum = _mm_add_pd(sum, _mm_set_sd(a[b[n-1]] * c[n-1]));
}
// horizontal sum: swap the two lanes, add, extract the low double
double finalSum = _mm_cvtsd_f64(_mm_add_pd(
    sum, _mm_shuffle_pd(sum, sum, _MM_SHUFFLE2(0, 1))
));
This produces very beautiful assembly code using gcc -O2 -msse2 (4.4.1).
As you can tell, an even n will make this loop go faster, as will an aligned c. If you can align c, change _mm_loadu_pd to _mm_load_pd for even faster execution.
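For completeness, one way to get that alignment on POSIX systems; a sketch, with a helper name of my choosing:

#include <stdlib.h>

/* Allocate n doubles on a 16-byte boundary so _mm_load_pd is safe.
 * (posix_memalign is POSIX; other platforms have their own equivalents.) */
static double *alloc_doubles_aligned16(size_t n)
{
    void *p = NULL;
    if (posix_memalign(&p, 16, n * sizeof(double)) != 0)
        return NULL;   /* caller must check for failure */
    return (double *)p;
}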