I'm looking for a single-producer, single-consumer FIFO implementation that performs faster than the usual lock-write-unlock-signal / wait-for-signal-lock-read-unlock approach. It should work on most POSIX operating systems (x86-specific is fine) and be written in either C or C++.
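For reference, the baseline I'm trying to beat looks roughly like this (a minimal sketch only; the class and member names are purely illustrative):

    // Baseline: a plain queue guarded by a mutex, with a condition variable
    // to wake the consumer. Every operation pays a lock/unlock, and the
    // producer may also pay a signal/wakeup.
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    class LockingQueue {
    public:
        void push(void* item) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(item);
            }
            cond_.notify_one();  // wake the consumer if it is waiting
        }

        void* pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            void* item = queue_.front();
            queue_.pop();
            return item;
        }

    private:
        std::queue<void*> queue_;
        std::mutex mutex_;
        std::condition_variable cond_;
    };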
I'm not looking to pass anything larger than a pointer.
I'm not necessarily attached to the lock-free idea, but I do want something fast and correct. One of the papers I read on the subject mentioned a two-queue approach that seemed interesting, but I haven't been able to find much about that since then.
From the research I've done so far, 0mq (which supposedly uses a lock-free structure for its inproc:// scheme) looks like it's the most attractive option. That being said, I'd like to be sure I haven't missed anything before I go down that path.
Another alternative might be a POSIX message queue, but that seems like it would be rather slow for thread-to-thread communication within a single process; is that true?
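For context, the POSIX message queue variant I have in mind is roughly the following (a sketch only; the queue name and sizes are arbitrary and most error handling is omitted). The main reason I suspect it is slow is that every send and receive is a system call.

    /* POSIX message queue sketch. Each mq_send/mq_receive is a syscall,
     * which is presumably why this is slow for intra-process use.
     * May require linking with -lrt on older glibc. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        struct mq_attr attr;
        std::memset(&attr, 0, sizeof attr);
        attr.mq_maxmsg  = 64;               /* capacity */
        attr.mq_msgsize = sizeof(void*);    /* only passing a pointer */

        mqd_t mq = mq_open("/spsc_demo", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { std::perror("mq_open"); return 1; }

        /* Producer side: send the pointer value as the payload. */
        int value = 42;
        void* out = &value;
        mq_send(mq, reinterpret_cast<const char*>(&out), sizeof out, 0);

        /* Consumer side: receive it back. */
        void* in = nullptr;
        mq_receive(mq, reinterpret_cast<char*>(&in), sizeof in, nullptr);
        std::printf("got %d\n", *static_cast<int*>(in));

        mq_close(mq);
        mq_unlink("/spsc_demo");
        return 0;
    }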
The question "Any single-consumer single-producer lock free queue implementation in C?" seems relevant, but the accepted answer there is less an enumeration of existing libraries than a warning that premature optimization is bad.
You will want to look at Intel's Threading Building Blocks (TBB). It builds on x86 user-mode atomic operations plus pthreads or Win32 threads, and provides fast, efficient, templated data structures; a concurrent queue is among them.
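A minimal sketch of how the TBB queue could be used for this case (assuming a recent TBB release; the exact header path varies between versions):

    // tbb::concurrent_bounded_queue used as a producer/consumer channel.
    // pop() blocks until an item is available, so no explicit signaling
    // is needed on the consumer side.
    #include <tbb/concurrent_queue.h>
    #include <cstdio>
    #include <thread>

    int main() {
        tbb::concurrent_bounded_queue<void*> queue;

        int payload = 42;

        std::thread producer([&] {
            queue.push(&payload);      // producer side
        });

        std::thread consumer([&] {
            void* item = nullptr;
            queue.pop(item);           // blocks until an item arrives
            std::printf("got %d\n", *static_cast<int*>(item));
        });

        producer.join();
        consumer.join();
        return 0;
    }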
In addition to the other answers here (and in this highly related question), I'll take the opportunity for a shameless plug of my own super-fast C++ implementation of a single-producer, single-consumer wait-free queue.
It's available on GitHub under the simplified BSD license (feel free to fork it!).
A comparable queue is Facebook's Folly queue, which can be ever-so-slightly faster, but does not support growing as needed (it has a fixed size).
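To illustrate the general technique such queues rely on (this is not the author's code, just a minimal sketch of a fixed-capacity, wait-free SPSC ring buffer of the kind the Folly queue also implements): the producer only ever writes the head index, the consumer only ever writes the tail index, and acquire/release atomics order the slot accesses, so neither side ever blocks the other.

    // Minimal fixed-capacity SPSC ring buffer (illustrative sketch only).
    #include <atomic>
    #include <cstddef>

    template <typename T, std::size_t Capacity>
    class SpscRing {
    public:
        // Called only by the single producer thread.
        bool try_push(const T& value) {
            std::size_t head = head_.load(std::memory_order_relaxed);
            std::size_t next = (head + 1) % Capacity;
            if (next == tail_.load(std::memory_order_acquire))
                return false;                     // queue is full
            buffer_[head] = value;
            head_.store(next, std::memory_order_release);
            return true;
        }

        // Called only by the single consumer thread.
        bool try_pop(T& value) {
            std::size_t tail = tail_.load(std::memory_order_relaxed);
            if (tail == head_.load(std::memory_order_acquire))
                return false;                     // queue is empty
            value = buffer_[tail];
            tail_.store((tail + 1) % Capacity, std::memory_order_release);
            return true;
        }

    private:
        T buffer_[Capacity];
        std::atomic<std::size_t> head_{0};  // written only by the producer
        std::atomic<std::size_t> tail_{0};  // written only by the consumer
    };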