
How can I optimize a calculation-intensive C++ program with a known bottleneck?

I am developing some scientific software for my university. It is being written in C++ on Windows (VS2008). The algorithm must calculate some values for a large number of matrix pairs, that is, at the core resides a loop iterating over the matrices, collecting some data, e.g.:

double sumA = 0, sumAsq = 0, sumB = 0, sumBsq = 0, diffsum = 0, diffsumsq = 0, result = 0;
double valA, valB;
for (int y=0; y < height; ++y)
{
    for (int x=0; x < width; ++x)
    { 
        valA = matrixA(x,y);
        valB = matrixB(x,y);
        sumA+=valA;
        sumAsq+=valA*valA;
        sumB+=valB;
        sumBsq+=valB*valB;
        diffsum+=valA-valB;
        diffsumsq+=(valA-valB)*(valA-valB);
    }
}
result = sumA + sumB / sumAsq + sumBsq * diffsum * diffsumsq;

This routine is executed millions of times for different matrixA, matrixB pairs. My problem is that this program is extremely slow, compiled in Release mode with all optimizations activated. Using the "pause-when-busy-and-inspect" debugger technique, I established that the program sits inside this loop virtually every time, even though, as you might expect, this routine is surrounded by a whole bunch of conditions and control branches. What puzzles me the most is that during its execution on a dual-processor Xeon-based system, the program utilizes one of the 4 cores (no surprise, it is single-threaded for now) but only up to about 25% of its limit, and with relatively large oscillations, where I would expect steady, 100% load until the program terminates.

The current version is actually a rewrite, created with performance optimization in mind. I was devastated when I found out it's actually slower than the original. The previous version used Boost matrices, which I replaced with OpenCV matrices after establishing that they were over 10 times faster when comparing the execution time of multiplying two 1000x100 matrices. I access the matrix data by manually dereferencing raw pointers, which I hoped would gain me some performance. I made the calculation routine a multi-line #define macro to enforce inlining and to avoid function calls and returns. I improved the math behind the calculations so that the final value is calculated in a single pass through the matrices (the old version required two passes). I expected huge gains, and yet the opposite is true. I'm nowhere near my old program's efficiency, not to mention that of commercial software for this particular application.

I was wondering if it perhaps has something to do with the matrix data being 8-bit chars. I once saw that access to floats was actually slower than access to doubles in my old program; perhaps chars are even slower, since the processor retrieves data in 32-bit chunks (this Xeon probably grabs even 64 bits). I also considered flattening the matrices into vectors to avoid the loop-inside-loop construct, as well as some form of vectorization, for example calculating the data for 4 (fewer? more?) consecutive matrix cells in a single loop iteration. Any other ideas please?

EDIT: actual code in the new, OpenCV based version:

const char *Aptr, *Bptr;
double sumA = 0, sumB = 0, sumAsq = 0, sumBsq = 0, diffsum = 0, diffsumsq = 0;
char Aval, Bval;

for (int y=0; y < height; ++y)
{
    Aptr = (char*)(AMatrix.imageData + AMatrix.widthStep * y);
    Bptr = (char*)(BMatrix.imageData + BMatrix.widthStep * y);
    for (int x=0; x < width; ++x)
    {
        Aval = Aptr[x];
        Bval = Bptr[x];

        sumA+=Aval;
        sumB+=Bval;
        sumAsq+=Aval*Aval;
        sumBsq+=Bval*Bval;
        diffsum+=Aval-Bval;
        diffsumsq+=(Aval-Bval)*(Aval-Bval);
    }
}

asked Jul 29 '10 by neuviemeporte



3 Answers

Various thoughts:

  • You say that you're only managing to achieve a CPU load of about 25%. I can think of two reasons for this:
    1. You're swapping. What is the size of your matrices? Do they fit entirely in physical memory? Look at your application's memory usage and working set size.
    2. The rest of your application's code is blocking on I/O. Does the code that surrounds your core routine do any I/O? It could be blocking there for large stretches of time, but of course you're not seeing that using the "pause-when-busy-and-inspect" technique because whenever the process unblocks again, it returns straight into your compute-intensive core routine.
  • Take a look at the assembly code for your core routine. Does it look reasonable?
  • Do you actually need to compute diffsum within the loop? It looks as if you could do diffsum = sumA - sumB once outside the loop (see the sketch after this list) -- but there may be numerical considerations that prevent you from doing this.
  • As renick has already commented, this looks like a prime target for SSE optimization. Again, you should make sure the compiler is generating reasonable assembly code (if you're using intrinsics and not writing the assembly yourself).
  • If you don't want to write SSE code yourself, at least make sure that your compiler's SSE flag is set. This will allow the compiler to use the SSE unit instead of the FPU for scalar floating-point operations, which will by itself improve performance because the stack-based FPU on the x86 is notoriously ill-suited to compiler code generation.
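
A minimal sketch of the diffsum hoisting from the third bullet, applied to the loop from the question (my adaptation, not Martin B's code): because the inputs are 8-bit integers, sumA and sumB stay exactly representable in a double for any realistic matrix size, so taking the difference once after the loop is numerically safe here.

double sumA = 0, sumB = 0, sumAsq = 0, sumBsq = 0, diffsumsq = 0;
for (int y = 0; y < height; ++y)
{
    const char *Aptr = AMatrix.imageData + AMatrix.widthStep * y;
    const char *Bptr = BMatrix.imageData + BMatrix.widthStep * y;
    for (int x = 0; x < width; ++x)
    {
        int a = Aptr[x], b = Bptr[x];
        sumA      += a;
        sumB      += b;
        sumAsq    += a * a;
        sumBsq    += b * b;
        diffsumsq += (a - b) * (a - b);
    }
}
double diffsum = sumA - sumB;  // one subtraction replaces width*height of them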

answered by Martin B


Your inner loop is calling functions! No matter how trivial they are, you pay a heavy penalty. You should try to linearize the matrix accesses (in essence, make them 1D) so that you can access them with plain pointer dereferencing:

vala = *matrixA++; 
valb = *matrixB++; 

Since you are doing simple additions and subtractions, look at SSE/SSE2 etc., depending on your target CPU's capabilities and your arithmetic (integer, floating point, etc.).
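
As a concrete illustration of that linearization (my sketch, reusing the variable names from the question's edited code), assuming the matrix data is stored contiguously -- for OpenCV images this only holds when widthStep equals width, i.e. the rows carry no alignment padding:

const char *pA = AMatrix.imageData;
const char *pB = BMatrix.imageData;
for (int i = 0; i < width * height; ++i)
{
    int a = *pA++;  // one pointer dereference per element, no (x,y) lookup
    int b = *pB++;
    sumA      += a;
    sumB      += b;
    sumAsq    += a * a;
    sumBsq    += b * b;
    diffsum   += a - b;
    diffsumsq += (a - b) * (a - b);
}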

EDIT: MMX/SSE2 intrinsics are functions that map one-to-one to CPU SIMD instructions. See these Microsoft pages to get started, and additionally I suggest looking at the Intel site for the IA-32/Intel 64 programmer's guides, or similar manuals from AMD.
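
To make the intrinsics suggestion concrete, here is a partial sketch (my illustration, not renick's code; sumRowSse2 is a hypothetical helper name) that computes the value sum and square sum for one row of signed 8-bit data with SSE2; the remaining accumulators follow the same pattern. It assumes width is a multiple of 16 and that per-row totals fit in 32-bit lanes, which holds for rows up to tens of thousands of pixels.

#include <emmintrin.h>  // SSE2 intrinsics, shipped with VS2008

long long sumRowSse2(const char *Aptr, int width, long long &sumAsq)
{
    const __m128i zero = _mm_setzero_si128();
    const __m128i ones = _mm_set1_epi16(1);
    __m128i acc   = zero;  // four 32-bit partial sums of the values
    __m128i accSq = zero;  // four 32-bit partial sums of the squares

    for (int x = 0; x < width; x += 16)
    {
        __m128i va = _mm_loadu_si128((const __m128i*)(Aptr + x));
        // Sign-extend the 16 bytes into two vectors of eight 16-bit ints:
        // unpacking against zero puts each byte in the high half of a word,
        // and the arithmetic shift brings it back down with the sign intact.
        __m128i lo = _mm_srai_epi16(_mm_unpacklo_epi8(zero, va), 8);
        __m128i hi = _mm_srai_epi16(_mm_unpackhi_epi8(zero, va), 8);
        // madd(v, ones) adds adjacent 16-bit lanes into 32-bit lanes;
        // madd(v, v) does the same for the squares.
        acc   = _mm_add_epi32(acc,   _mm_madd_epi16(lo, ones));
        acc   = _mm_add_epi32(acc,   _mm_madd_epi16(hi, ones));
        accSq = _mm_add_epi32(accSq, _mm_madd_epi16(lo, lo));
        accSq = _mm_add_epi32(accSq, _mm_madd_epi16(hi, hi));
    }

    // Fold the four 32-bit lanes of each accumulator into a scalar.
    int t[4], tSq[4];
    _mm_storeu_si128((__m128i*)t,   acc);
    _mm_storeu_si128((__m128i*)tSq, accSq);
    sumAsq += (long long)tSq[0] + tSq[1] + tSq[2] + tSq[3];
    return (long long)t[0] + t[1] + t[2] + t[3];
}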

I also highly recommend this book on Optimization for Intel Architectures. It explains all the hidden capabilities of your CPU and compiler.

answered by renick


Can you check the assembler code this loop generates? If you are only getting 25% processor use, the loop may be memory bound. There are about eight local variables, and I imagine the compiler is not mapping all of them to registers, so many memory operations are performed in each iteration. One consideration would be to write that loop in assembler.

Why do you walk the matrix column by column? Matrices are stored in memory row after row, so if you access a whole column in the inner loop, you are probably triggering more memory loads across your different memory levels (caches and so on).
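
To illustrate the access-order point with a generic sketch (process() and the indexing are illustrative, not the asker's code): in a row-major layout, keeping the column index in the inner loop walks memory linearly, while swapping the loops strides a full row per access and touches a new cache line almost every iteration.

// Cache-friendly: consecutive iterations touch adjacent bytes.
for (int row = 0; row < height; ++row)
    for (int col = 0; col < width; ++col)
        process(matrix[row * width + col]);

// Cache-hostile: each access jumps 'width' elements ahead.
for (int col = 0; col < width; ++col)
    for (int row = 0; row < height; ++row)
        process(matrix[row * width + col]);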

answered by rturrado