
Faster abs-max of float array

I need to draw peak meters for audio in realtime: a minimum of 44100 samples per second times a minimum of 40 streams. Each buffer is between 64 and 1024 samples, and I need to grab the abs max from each buffer. (These are then fed through a kind of lowpass filter and drawn at about 20 ms intervals.)

float absMaxOfBuffer = 0.0f;
for (int i = 0; i < numSamples; i++)
{
    absMaxOfBuffer = MAX(fabs(buffer[i]), absMaxOfBuffer);
}

That's how I do it now. I'd like to do it much faster. The buffers have floats in the -1 to 1 range, hence the fabs.

Question: is there some tricky comp-sci, quicksort-esque way of doing this faster?

Failing that, branchless ABS and MAX functions for floats, do they exist?

edit: Primary platform is Linux/gcc but a windows port is planned (probably with mingw).

edit, the second:
I gave the accept to onebyone because of the bit regarding the actual algo structure, which was central to the question.
I'll try unrolling the loop four at a time, zeroing the sign bits and then getting the max with SSE (the maxps instruction), and see if that doesn't peel the banana. Thanks for the suggestions; I've up-voted a few of you as runners-up. :)
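That plan can be sketched with SSE intrinsics like this (the function name is mine, and for brevity it assumes numSamples is a multiple of 4; a real version would finish the remainder with the scalar loop):

```cpp
#include <xmmintrin.h>  // SSE intrinsics: _mm_max_ps (maxps), _mm_andnot_ps (andnps)

// Process four floats at a time: zero the sign bits, reduce with maxps.
float absMaxSSE(const float *buffer, int numSamples)
{
    const __m128 signMask = _mm_set1_ps(-0.0f);  // only the sign bit set in each lane
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < numSamples; i += 4) {
        __m128 v = _mm_loadu_ps(buffer + i);
        // andnot clears the sign bits: a branchless fabs on four lanes at once
        acc = _mm_max_ps(acc, _mm_andnot_ps(signMask, v));
    }
    // Horizontal max of the four lanes.
    __m128 t = _mm_max_ps(acc, _mm_shuffle_ps(acc, acc, _MM_SHUFFLE(2, 3, 0, 1)));
    t = _mm_max_ps(t, _mm_shuffle_ps(t, t, _MM_SHUFFLE(1, 0, 3, 2)));
    float out;
    _mm_store_ss(&out, t);
    return out;
}
```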

Eric_DE asked May 03 '09

4 Answers

fabs and comparison are both really fast for IEEE floats (like, single-integer-op fast in principle).

If the compiler isn't inlining both operations, then either poke it until it does, or find the implementation for your architecture and inline it yourself.

You can maybe get something out of the fact that positive IEEE floats go in the same order as the integers with the same bit patterns. That is,

f > g   iff   *(int*)&f > *(int*)&g

So once you've fabs'ed, I think that a branch-free max for int will also work for float (assuming they're the same size of course). There's an explanation of why this works here: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm. But your compiler already knows all this, as does your CPU, so it may not make any difference.
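For illustration, such a branch-free max built on the bit patterns might look like this (the function name is mine; memcpy is used instead of `*(int*)&` to stay within the aliasing rules, and the trick only holds for non-negative, i.e. already-fabs'ed, inputs):

```cpp
#include <cstdint>
#include <cstring>

// Branch-free max of two non-negative IEEE floats via their integer bit
// patterns; positive floats order the same way as their bits.
float branchlessMax(float f, float g)
{
    int32_t fi, gi;
    std::memcpy(&fi, &f, sizeof fi);
    std::memcpy(&gi, &g, sizeof gi);
    int32_t mask = -(int32_t)(fi < gi);  // all-ones when fi < gi, else zero
    int32_t mi = (fi & ~mask) | (gi & mask);
    float out;
    std::memcpy(&out, &mi, sizeof out);
    return out;
}
```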

There is no asymptotically faster way of doing it: your algorithm is already O(n), and you can't beat that while still looking at every sample.

I guess there's probably something in your processor's SIMD (that is, SSE2 on Intel) that would help, by processing more data per clock cycle than your code. But I don't know what. If there is, then it quite possibly will be several times faster.

You could probably parallelize on a multi-core CPU, especially since you're dealing with 40 independent streams anyway. That will be at best a few factors faster. "Just" launch the appropriate number of extra threads, split the work between them, and use the lightest-weight primitive you can to indicate when they're all complete (maybe a thread barrier). I'm not quite clear whether you're plotting the max of all 40 streams, or the max of each separately, so maybe you don't actually need to synchronise the worker threads, other than to ensure results are delivered to the next stage without data corruption.
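A minimal sketch of that split, assuming the peak of each stream is wanted separately (every name here is illustrative; the question gives no interface):

```cpp
#include <algorithm>
#include <cmath>
#include <thread>
#include <vector>

// Split the independent streams across worker threads. Threads take
// disjoint streams, so no locking is needed; the joins at the end act
// as the completion barrier.
void computePeaks(const std::vector<std::vector<float>> &streams,
                  std::vector<float> &peaks, unsigned numThreads)
{
    peaks.assign(streams.size(), 0.0f);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        workers.emplace_back([&streams, &peaks, numThreads, t] {
            // Thread t handles streams t, t + numThreads, t + 2*numThreads, ...
            for (std::size_t s = t; s < streams.size(); s += numThreads) {
                float m = 0.0f;
                for (float x : streams[s])
                    m = std::max(m, std::fabs(x));
                peaks[s] = m;
            }
        });
    }
    for (auto &w : workers)
        w.join();
}
```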

It's probably worth taking a look at the disassembly to see how much the compiler has unrolled the loop. Try unrolling it a bit more, see if that makes any difference.

Another thing to think about is how many cache misses you are getting, and whether it's possible to reduce the number by giving the cache a few clues so it can load the right pages ahead of time. But I have no experience with this, and I wouldn't hold out much hope. __builtin_prefetch is the magic incantation on gcc, and I guess the first experiment would be something like "prefetch the beginning of the next block before entering the loop for this block".
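That first experiment might look something like this (the two-pointer interface is an assumption about how the blocks arrive; whether the hint helps at all has to be measured, not assumed):

```cpp
#include <algorithm>
#include <cmath>

// Hint the hardware prefetcher at the next block before scanning the
// current one. __builtin_prefetch is a GCC/Clang builtin and is purely
// advisory: it never changes the result, only (maybe) the timing.
float absMaxPrefetch(const float *cur, const float *next, int n)
{
    __builtin_prefetch(next);  // request the next block's first cache line
    float m = 0.0f;
    for (int i = 0; i < n; ++i)
        m = std::max(m, std::fabs(cur[i]));
    return m;
}
```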

What percentage of the required speed are you currently at? Or is it a case of "as fast as possible"?

Steve Jessop answered Sep 29 '22

There is a branchless fabs documented at http://www.scribd.com/doc/2348628/The-Aggregate-Magic-Algorithms

Please also note that recent versions of GCC will inline a branchless fabs for you, using MMX instructions. There are also fmin and fmax, but GCC won't inline those (you'll get a call to fmin).
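The bit-clearing trick behind such a branchless fabs can be sketched like this (memcpy keeps the type-punning well-defined; compilers typically reduce the whole thing to a single AND):

```cpp
#include <cstdint>
#include <cstring>

// Branchless fabs for IEEE single precision: clear the sign bit (the MSB).
float branchlessFabs(float x)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= 0x7FFFFFFFu;  // mask off the sign bit
    std::memcpy(&x, &bits, sizeof x);
    return x;
}
```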

pts answered Sep 29 '22

Things to try:

  • fabs() might not be an inline function.
  • Special assembly instructions might help. On Intel, SSE has an instruction to compute the maximum of four floats at once.
  • Failing that, the IEEE 754 specification is such that if a and b are non-negative floats, then a < b is equivalent to *(int *)&a < *(int *)&b. Moreover, for any a, you can compute -a from a by flipping the MSB. Together, these properties might enable some bit-twiddling hacks.
  • Do you really need the maximum of every sample? Perhaps the maximum might occur more than once, opening up the possibility of not examining every input.
  • Can you compute the maximum in a streaming fashion?
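The streaming idea from the last bullet amounts to carrying the running peak across buffers, so each sample is touched exactly once as it arrives (a sketch; the struct name is mine):

```cpp
#include <algorithm>
#include <cmath>

// Running peak carried across buffers: feed each buffer as it arrives
// instead of rescanning accumulated data later.
struct RunningPeak {
    float peak = 0.0f;
    void feed(const float *buf, int n) {
        for (int i = 0; i < n; ++i)
            peak = std::max(peak, std::fabs(buf[i]));
    }
};
```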
Dave answered Sep 29 '22

You may want to look at Eigen.

It is a C++ template library that uses SSE (2 and later) and AltiVec instruction sets with graceful fallback to non-vectorized code.

  • Fast (see the benchmark).
  • Expression templates intelligently remove temporaries and enable lazy evaluation where appropriate; Eigen takes care of this automatically and handles aliasing in most cases.
  • Explicit vectorization is performed for the SSE (2 and later) and AltiVec instruction sets, with graceful fallback to non-vectorized code. Expression templates allow these optimizations to be applied globally, to whole expressions.
  • With fixed-size objects, dynamic memory allocation is avoided, and loops are unrolled when that makes sense.
  • For large matrices, special attention is paid to cache-friendliness.

lothar answered Sep 29 '22