I am developing some scientific software for my university. It is being written in C++ on Windows (VS2008). The algorithm must calculate some values for a large number of matrix pairs; that is, at its core sits a loop iterating over the matrices, collecting some data, e.g.:
sumA = sumAsq = sumB = sumBsq = diffsum = diffsumsq = result = 0;
for (int y=0; y < height; ++y)
{
for (int x=0; x < width; ++x)
{
valA = matrixA(x,y);
valB = matrixB(x,y);
sumA+=valA;
sumAsq+=valA*valA;
sumB+=valB;
sumBsq+=valB*valB;
diffsum+=valA-valB;
diffsumsq+=(valA-valB)*(valA-valB);
}
}
result = sumA + sumB / sumAsq + sumBsq * diffsum * diffsumsq;
return result;
This routine is executed millions of times for different matrixA, matrixB pairs. My problem is that this program is extremely slow, compiled in Release mode with all optimizations activated. Using the "pause-when-busy-and-inspect" debugger technique, I established that the program sits inside this loop virtually every time, even though, as you might expect, this routine is surrounded by a whole bunch of conditions and control branches. What puzzles me the most is that during its execution on a dual-processor Xeon-based system, the program utilizes one of the 4 cores (no surprise, it is single-threaded for now) but only up to about 25% of its limit, and with relatively large oscillations, where I would expect steady, 100% load until the program terminates.
The current version is actually a rewrite, created with performance in mind. I was devastated to find out it's actually slower than the original. The previous version used Boost matrices, which I replaced with OpenCV matrices after establishing that the latter were over 10 times faster when comparing the execution time of multiplying two 1000x100 matrices. I access the matrix data by manually dereferencing a raw pointer to it, which I hoped would gain me some performance. I made the calculation routine a multi-line #define macro to enforce its inlining and to avoid function calls and returns. I improved the math behind the calculations so that the final value is calculated in a single pass through the matrices (the old version required two passes). I expected huge gains, and yet the opposite is true. I'm nowhere near my old program's efficiency, not to mention that of commercial software for this particular application.
I was wondering if it perhaps has something to do with the matrix data being 8-bit chars. I once saw that access to floats was actually slower than to doubles in my old program; perhaps chars are even slower, since the processor retrieves data in 32-bit chunks (this Xeon probably grabs even 64 bits). I also considered turning the matrices into vectors to avoid the loop-inside-loop construct, as well as some form of vectorization, for example calculating the data for 4 (fewer? more?) consecutive matrix cells in a single loop iteration. Any other ideas, please?
EDIT: actual code in the new, OpenCV-based version:
const char *Aptr, *Bptr;
double sumA = 0, sumB = 0, sumAsq = 0, sumBsq = 0, diffsum = 0, diffsumsq = 0;
char Aval, Bval;
for (int y=0; y < height; ++y)
{
Aptr = (char*)(AMatrix.imageData + AMatrix.widthStep * y);
Bptr = (char*)(BMatrix.imageData + BMatrix.widthStep * y);
for (int x=0; x < width; ++x)
{
Aval = Aptr[x];
Bval = Bptr[x];
sumA+=Aval;
sumB+=Bval;
sumAsq+=Aval*Aval;
sumBsq+=Bval*Bval;
diffsum+=Aval-Bval;
diffsumsq+=(Aval-Bval)*(Aval-Bval);
}
}
The first step in optimizing a program is to eliminate unnecessary work, making the code perform its intended task as efficiently as possible. This includes eliminating unnecessary function calls, conditional tests, and memory references.
Various thoughts:
Why do you accumulate diffsum within the loop? It looks as if you could do diffsum = sumA - sumB once outside the loop -- but there may be numerical considerations that prevent you from doing this.

Your inner loop is calling functions! No matter how trivial they are, you pay a heavy penalty. You should try to linearize the matrix accesses (in essence, make them 1D) so that you can access them with plain pointer dereferencing:
vala = *matrixA++;
valb = *matrixB++;
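Fleshing that out for the OpenCV buffers from the question (a minimal sketch, assuming the rows are stored contiguously, i.e. AMatrix.widthStep == width -- OpenCV may pad rows, in which case this does not apply):

// Hypothetical flattened loop: one pass over width*height elements.
// Only valid when rows are contiguous (widthStep == width); otherwise
// row padding would corrupt the walk.
const char *pA = AMatrix.imageData;
const char *pB = BMatrix.imageData;
for (int i = 0; i < width * height; ++i)
{
    int valA = *pA++;   // single dereference + pointer bump per element
    int valB = *pB++;
    sumA   += valA;
    sumAsq += valA * valA;
    sumB   += valB;
    sumBsq += valB * valB;
    diffsumsq += (valA - valB) * (valA - valB);
}
diffsum = sumA - sumB;  // hoisted out of the loop, per the first point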
And since you are doing simple additions and subtractions, look at SSE/SSE2 etc., depending on your target CPU's capabilities and your arithmetic (integer, floating point, etc.).
EDIT: MMX/SSE2 intrinsics are functions that map one-to-one to CPU SIMD instructions. See Microsoft's pages on compiler intrinsics to get started, and additionally I suggest looking at the Intel site for the IA-32/Intel 64 programmer's guides, or similar manuals from AMD.
I also highly recommend this book on Optimization for Intel Architectures. It will explain all the hidden capabilities of your CPU and compiler.
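To make the SSE2 suggestion concrete, here is a minimal sketch, not a drop-in replacement: it assumes unsigned 8-bit data, a width that is a multiple of 16, and totals small enough that the 32-bit square accumulators never overflow; everything else follows the loop from the question.

#include <emmintrin.h>  // SSE2 intrinsics, supported by VS2008

const __m128i zero = _mm_setzero_si128();
__m128i sumA64 = zero, sumB64 = zero;                    // byte sums, 64-bit lanes
__m128i sumAsq32 = zero, sumBsq32 = zero, dsq32 = zero;  // squares, 32-bit lanes
for (int y = 0; y < height; ++y)
{
    const unsigned char *pA = (const unsigned char*)(AMatrix.imageData + AMatrix.widthStep * y);
    const unsigned char *pB = (const unsigned char*)(BMatrix.imageData + BMatrix.widthStep * y);
    for (int x = 0; x < width; x += 16)  // 16 pixels per iteration
    {
        __m128i a = _mm_loadu_si128((const __m128i*)(pA + x));
        __m128i b = _mm_loadu_si128((const __m128i*)(pB + x));
        // PSADBW against zero yields the horizontal sum of 8 bytes per 64-bit lane.
        sumA64 = _mm_add_epi64(sumA64, _mm_sad_epu8(a, zero));
        sumB64 = _mm_add_epi64(sumB64, _mm_sad_epu8(b, zero));
        // Zero-extend bytes to 16 bits; PMADDWD(v, v) then sums adjacent squares.
        __m128i alo = _mm_unpacklo_epi8(a, zero), ahi = _mm_unpackhi_epi8(a, zero);
        __m128i blo = _mm_unpacklo_epi8(b, zero), bhi = _mm_unpackhi_epi8(b, zero);
        sumAsq32 = _mm_add_epi32(sumAsq32, _mm_madd_epi16(alo, alo));
        sumAsq32 = _mm_add_epi32(sumAsq32, _mm_madd_epi16(ahi, ahi));
        sumBsq32 = _mm_add_epi32(sumBsq32, _mm_madd_epi16(blo, blo));
        sumBsq32 = _mm_add_epi32(sumBsq32, _mm_madd_epi16(bhi, bhi));
        // |a-b| per byte via saturating subtracts; (a-b)^2 == |a-b|^2.
        __m128i d = _mm_or_si128(_mm_subs_epu8(a, b), _mm_subs_epu8(b, a));
        __m128i dlo = _mm_unpacklo_epi8(d, zero), dhi = _mm_unpackhi_epi8(d, zero);
        dsq32 = _mm_add_epi32(dsq32, _mm_madd_epi16(dlo, dlo));
        dsq32 = _mm_add_epi32(dsq32, _mm_madd_epi16(dhi, dhi));
    }
}
// Reduce each vector accumulator to a scalar, e.g. for the squares:
int lanes[4];
_mm_storeu_si128((__m128i*)lanes, sumAsq32);
double sumAsq = (double)lanes[0] + lanes[1] + lanes[2] + lanes[3];
// ...same for sumBsq32 and dsq32; add the two 64-bit lanes of sumA64/sumB64,
// and compute diffsum = sumA - sumB at the end, as suggested above.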
Can you check the assembler code this loop is generating? If you only get 25% processor use, this loop may be memory bound. There are about eight local variables there, and I imagine the compiler is not mapping all of them to registers, so many memory operations are being done in each iteration. One consideration would be to write that loop in assembler.
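If you don't want to dig the listing out of a debugger, the compiler can emit it for you; assuming the standard cl.exe driver and a hypothetical yourfile.cpp, the /FA family of switches writes an .asm file next to the object file (/FAs interleaves the source lines with the generated assembly):

cl /O2 /FAs yourfile.cpp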
Why do you walk the matrix column by column? Matrices will be stored in memory row after row, so if you access a whole column in the inner loop, you are probably requesting more memory loads from your different memory levels (caches and so on).
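To illustrate the difference (a generic sketch with a hypothetical process() call, not the asker's exact code): for row-major storage, element (x, y) lives at data[y * width + x], so the inner loop should vary x:

for (int y = 0; y < height; ++y)      // consecutive addresses: cache-friendly
    for (int x = 0; x < width; ++x)
        process(data[y * width + x]);

for (int x = 0; x < width; ++x)       // stride of 'width' per step: cache-hostile
    for (int y = 0; y < height; ++y)
        process(data[y * width + x]);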