I want to build a thread pool that supports tasks with priorities, so I need a thread-safe priority queue. Of course, I could put one big lock around std::priority_queue, but that is not very efficient.
My idea was to implement a binary heap with concurrent extraction: each element has its own spinlock, and there is a global shared_mutex that is write-locked when the heap changes size and read-locked while nodes are heapified; swapping or comparing two nodes locks their spinlocks. However, there are many potential deadlocks, and I still don't know how to avoid them.
Are there any good data structures that can be made thread-safe more easily? Or are there existing heap implementations I can study?
You really should just implement the simplest thing you can find, protect it with a lock, and test it in your application. Unless you're hitting it thousands of times per second, the overhead of the lock will almost certainly be irrelevant to the performance of your application. This is especially true if your queue will be relatively small.
My suggestion would be to start with std::priority_queue, wrap a lock around it, and give it a shot.
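For the thread-pool use case, that can be as small as a mutex plus a condition variable so worker threads can block until work arrives. Here is a minimal sketch of that idea (the class name, the blocking pop, and the try_pop variant are my own additions for illustration, not part of the answer):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <optional>
#include <queue>
#include <vector>

// A std::priority_queue guarded by a single mutex, with a condition
// variable so consumers can wait for work.
template <typename T, typename Compare = std::less<T>>
class LockedPriorityQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();  // wake one waiting consumer
    }

    // Blocks until an element is available, then pops the highest-priority one.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T top = queue_.top();
        queue_.pop();
        return top;
    }

    // Non-blocking variant: returns std::nullopt if the queue is empty.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return std::nullopt;
        T top = queue_.top();
        queue_.pop();
        return top;
    }

private:
    std::priority_queue<T, std::vector<T>, Compare> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
```

The lock is held only for the duration of a single push or pop, so unless the queue is under very heavy contention, this is usually fast enough in practice.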
If you really think you need a lock-free concurrent priority queue, look at Concurrent mutable priority queue.
Don't be so quick to assume that a lock-free priority queue will be faster than one guarded by a mutex. As you've seen, lock-free structures of any sophistication tend to be monumentally complex and involve a great number of atomic operations. And on modern processors, those atomic operations are relatively expensive, because of the work required to keep the memory view coherent across the cores of a many-core CPU.
In this case, I would be gobsmacked if a simple spinlock around a simple binary heap were not much, much faster than a lock-free heap implementation, regardless of contention level.
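For reference, the kind of spinlock meant here can be built from std::atomic_flag, with the heap itself remaining an ordinary std::priority_queue (which is a binary heap internally). This is only an illustrative sketch under those assumptions, not a benchmarked implementation:

```cpp
#include <atomic>
#include <mutex>
#include <queue>

// A simple test-and-set spinlock; lock()/unlock() make it usable with std::lock_guard.
class Spinlock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock()   { while (flag_.test_and_set(std::memory_order_acquire)) { /* spin */ } }
    void unlock() { flag_.clear(std::memory_order_release); }
};

// A binary heap (std::priority_queue) protected by one spinlock.
template <typename T>
class SpinlockHeap {
public:
    void push(T value) {
        std::lock_guard<Spinlock> lock(lock_);
        heap_.push(std::move(value));
    }

    bool try_pop(T& out) {
        std::lock_guard<Spinlock> lock(lock_);
        if (heap_.empty()) return false;
        out = heap_.top();
        heap_.pop();
        return true;
    }

private:
    std::priority_queue<T> heap_;
    Spinlock lock_;
};
```

Since the critical section is just a couple of pointer moves and a sift through the heap, the spinlock is held for a very short time, which is exactly the situation where spinlocks do well.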