I am trying to optimize some parts of OpenCV code using NEON. Here is the original code block I am working on. (Note: if it is of any importance, you can find the full source at "opencvfolder/modules/video/src/lkpyramid.cpp". It is an implementation of an object tracking algorithm.)
for( ; x < colsn; x++ )
{
    deriv_type t0 = (deriv_type)(trow0[x+cn] - trow0[x-cn]);
    deriv_type t1 = (deriv_type)((trow1[x+cn] + trow1[x-cn])*3 + trow1[x]*10);
    drow[x*2] = t0; drow[x*2+1] = t1;
}
In this code, deriv_type is 2 bytes in size. And here is the NEON assembly I have written. With the original code I measure 10-11 fps; with NEON it is worse, I can only get 5-6 fps. I don't really know much about NEON, so there are probably lots of mistakes in this code. Where am I going wrong? Thanks
for( ; x < colsn; x += 4 )
{
    __asm__ __volatile__(
        "vld1.16 d2, [%2]       \n\t"   // d2 = trow0[x+cn]
        "vld1.16 d3, [%3]       \n\t"   // d3 = trow0[x-cn]
        "vsub.i16 d9, d2, d3    \n\t"   // d9 = d2 - d3
        "vld1.16 d4, [%4]       \n\t"   // d4 = trow1[x+cn]
        "vld1.16 d5, [%5]       \n\t"   // d5 = trow1[x-cn]
        "vld1.16 d6, [%6]       \n\t"   // d6 = trow1[x]
        "vmov.i16 d7, #3        \n\t"   // d7 = 3
        "vmov.i16 d8, #10       \n\t"   // d8 = 10
        "vadd.i16 d4, d4, d5    \n\t"   // d4 = d4 + d5
        "vmul.i16 d10, d4, d7   \n\t"   // d10 = d4 * d7
        "vmla.i16 d10, d6, d8   \n\t"   // d10 = d10 + d6 * d8
        "vst2.16 {d9,d10}, [%0] \n\t"   // drow[x*2] = d9; drow[x*2+1] = d10
        //"vst1.16 d4, [%1]     \n\t"
        : // output
        : "r"(drow+x*2), "r"(drow+x*2+1), "r"(trow0+x+cn), "r"(trow0+x-cn), "r"(trow1+x+cn), "r"(trow1+x-cn), "r"(trow1+x) // input
        : "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10", "memory" // clobbers
    );
}
EDIT
This is the version with intrinsics. It is almost the same as before, and it is still slow.
const int16x8_t vk3 = { 3, 3, 3, 3, 3, 3, 3, 3 };
const int16x8_t vk10 = { 10, 10, 10, 10, 10, 10, 10, 10 };
for( ; x < colsn; x += 8 )
{
    int16x8x2_t loaded;
    int16x8_t t0a = vld1q_s16(&trow0[x + cn]);
    int16x8_t t0b = vld1q_s16(&trow0[x - cn]);
    loaded.val[0] = vsubq_s16(t0a, t0b); // t0 = (trow0[x+cn] - trow0[x-cn])
    loaded.val[1] = vld1q_s16(&trow1[x + cn]);
    int16x8_t t1b = vld1q_s16(&trow1[x - cn]);
    int16x8_t t1c = vld1q_s16(&trow1[x]);
    loaded.val[1] = vaddq_s16(loaded.val[1], t1b);
    loaded.val[1] = vmulq_s16(loaded.val[1], vk3);
    loaded.val[1] = vmlaq_s16(loaded.val[1], t1c, vk10);
    vst2q_s16(&drow[x*2], loaded); // interleaved store: drow[x*2] = t0, drow[x*2+1] = t1
}
You're creating a lot of pipeline stalls due to data hazards. For example, these three instructions:
"vadd.i16 d4, d4, d5 \n\t" // d4 = d4 + d5
"vmul.i16 d10, d4, d7 \n\t" // d10 = d4 * d7
"vmla.i16 d10, d6, d8 \n\t" // d10 = d10 + d6 * d8
They each take only one cycle to issue, but there are several-cycle stalls between them because the results are not ready when the next instruction needs them (see the documentation on NEON instruction scheduling).
Try unrolling the loop a few times and interleaving their instructions. The compiler might do this for you if you use intrinsics. It's not impossible to beat the compiler at instruction scheduling and the like, but it is quite hard and not often worth it (this might fall under not optimizing prematurely).
EDIT
Your intrinsic code is reasonable; I suspect the compiler is just not doing a very good job. Take a look at the assembly code it's producing (objdump -d) and you will probably see that it's also creating a lot of pipeline hazards. A later version of the compiler may help, but if it doesn't you might have to modify the loop yourself to hide the latency of the results (you will need the instruction timings). Keep the current code around, as it is correct and should be optimisable by a clever compiler.
You might end up with something like:
// do step 1 of first iteration
// ...
for (int i = 0; i < n - 1; i++) {
    // do step 1 of (i+1)th iteration
    // do step 2 of (i)th iteration
    // with their instructions interleaved
    // ...
}
// do step 2 of (n-1)th
// ...
You can also split the loop into more than 2 steps, or unroll the loop a few times (e.g. change i++ to i += 2 and double the body of the loop, changing i to i + 1 in the second half). A sketch of what this might look like for your loop follows below. I hope this answer helps, let me know if anything is unclear!
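For your loop, a two-stage version of the intrinsics code might look something like this. It is untested and only meant to show the shape of the transformation: it assumes (colsn - x) is a non-zero multiple of 8 at entry, reuses vk3/vk10 from your version, and includes the interleaved vst2q_s16 store into drow.
// Software-pipelined sketch: loads for iteration i+1 overlap the arithmetic
// of iteration i. Assumes (colsn - x) is a non-zero multiple of 8.
int16x8_t a0 = vld1q_s16(&trow0[x + cn]);   // step 1 of first iteration
int16x8_t a1 = vld1q_s16(&trow0[x - cn]);
int16x8_t b0 = vld1q_s16(&trow1[x + cn]);
int16x8_t b1 = vld1q_s16(&trow1[x - cn]);
int16x8_t b2 = vld1q_s16(&trow1[x]);
for( ; x < colsn - 8; x += 8 )
{
    // step 1 of the (i+1)th iteration: issue the next loads now, so they are
    // in flight while the arithmetic below waits on the current data
    int16x8_t na0 = vld1q_s16(&trow0[x + 8 + cn]);
    int16x8_t na1 = vld1q_s16(&trow0[x + 8 - cn]);
    int16x8_t nb0 = vld1q_s16(&trow1[x + 8 + cn]);
    int16x8_t nb1 = vld1q_s16(&trow1[x + 8 - cn]);
    int16x8_t nb2 = vld1q_s16(&trow1[x + 8]);
    // step 2 of the (i)th iteration: arithmetic and store on the current data
    int16x8x2_t out;
    out.val[0] = vsubq_s16(a0, a1);                    // t0
    out.val[1] = vmulq_s16(vaddq_s16(b0, b1), vk3);    // (b0 + b1) * 3
    out.val[1] = vmlaq_s16(out.val[1], b2, vk10);      // ... + b2 * 10
    vst2q_s16(&drow[x*2], out);
    a0 = na0; a1 = na1; b0 = nb0; b1 = nb1; b2 = nb2;  // rotate to next iteration
}
// step 2 of the last iteration
int16x8x2_t out;
out.val[0] = vsubq_s16(a0, a1);
out.val[1] = vmlaq_s16(vmulq_s16(vaddq_s16(b0, b1), vk3), b2, vk10);
vst2q_s16(&drow[x*2], out);
x += 8;
After a change like this, look at the objdump output again to see whether the loads and arithmetic really ended up interleaved; on an in-order core, the scheduling in the emitted assembly is what actually decides the fps.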
There is some loop-invariant work in there that needs to be moved outside the for loop (the vmov.i16 constant loads, for example) - this may help a little.
You could also consider using full-width SIMD operations, so that you can process 8 points per loop iteration rather than 4.
Most importantly though, you should probably be using intrinsics rather than raw asm, so that the compiler can take care of peephole optimisation, register allocation, instruction scheduling, loop unrolling, etc.
E.g.
// constants - init outside loop
const int16x8_t vk3 = { 3, 3, 3, 3, 3, 3, 3, 3 };
const int16x8_t vk10 = { 10, 10, 10, 10, 10, 10, 10, 10 };

for( ; x < colsn; x += 8 )
{
    int16x8_t t0a = vld1q_s16(&trow0[x + cn]);
    int16x8_t t0b = vld1q_s16(&trow0[x - cn]);
    int16x8_t t0 = vsubq_s16(t0a, t0b); // t0 = (trow0[x + cn] - trow0[x - cn])
    // ...
}
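The elided part computes t1 the same way as your scalar code and then writes both results with a single interleaved store; roughly (an untested sketch, following the drow layout of the scalar loop):
int16x8_t t1a = vld1q_s16(&trow1[x + cn]);
int16x8_t t1b = vld1q_s16(&trow1[x - cn]);
int16x8_t t1c = vld1q_s16(&trow1[x]);
// t1 = (trow1[x+cn] + trow1[x-cn])*3 + trow1[x]*10
int16x8_t t1 = vmlaq_s16(vmulq_s16(vaddq_s16(t1a, t1b), vk3), t1c, vk10);

int16x8x2_t out;
out.val[0] = t0;
out.val[1] = t1;
vst2q_s16(&drow[x*2], out); // vst2 interleaves: drow[x*2+0] = t0, drow[x*2+1] = t1
The nice thing about vst2q_s16 here is that the t0/t1 interleaving comes for free, matching the drow[x*2] / drow[x*2+1] pattern of the scalar code without any extra shuffling.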