 

How to make a multiple-read/single-write lock from more basic synchronization primitives?

We have found that we have several spots in our code where concurrent reads of data protected by a mutex are rather common, while writes are rare. Our measurements seem to say that using a simple mutex seriously hinders the performance of the code reading that data. So what we would need is a multiple-read/single-write mutex. I know that this can be built on top of simpler primitives, but before I try my hand at this, I'd rather ask for existing knowledge:

What is an approved way to build a multiple-read/single-write lock out of simpler synchronization primitives?

I do have an idea of how to make it, but I'd rather have answers unbiased by what I (probably wrongly) came up with. (Note: What I expect is an explanation of how to do it, probably in pseudo code, not a full-fledged implementation. I can certainly write the code myself.)

Caveats:

  • This needs to have reasonable performance. (What I have in mind would require two lock/unlock operations per access. Now that might not be good enough, but needing many of them instead seems unreasonable.)

  • Commonly, reads are more numerous, but writes are more important and performance-sensitive than reads. Readers must not starve writers.

  • We are stuck on a rather old embedded platform (a proprietary variant of VxWorks 5.5), with a rather old compiler (GCC 4.1.2), and boost 1.52 – except for most of the parts of boost that rely on POSIX, because POSIX isn't fully implemented on that platform. The locking primitives available are basically several kinds of semaphores (binary, counting, etc.), on top of which we have already created mutexes, condition variables, and monitors.

  • This is IA32, single-core.

asked Jan 09 '15 by sbi

1 Answer

At first glance I thought I recognized this answer as the same algorithm that Alexander Terekhov introduced. But after studying it I believe that it is flawed. It is possible for two writers to simultaneously wait on m_exclusive_cond. When one of those writers wakes and obtains the exclusive lock, it will set exclusive_waiting_blocked = false on unlock, thus leaving the mutex in an inconsistent state. After that, the mutex is likely hosed.

N2406, which first proposed std::shared_mutex, contains a partial implementation, which is repeated below with updated syntax.

#include <mutex>
#include <condition_variable>
#include <climits>

using std::mutex;
using std::condition_variable;
using std::unique_lock;
using std::lock_guard;
using std::try_to_lock;

class shared_mutex
{
    mutex               mut_;
    condition_variable  gate1_;
    condition_variable  gate2_;
    unsigned            state_;

    // The high bit of state_ records "a writer has entered"; the remaining
    // bits count the readers that currently hold shared ownership.
    static const unsigned write_entered_ = 1U << (sizeof(unsigned)*CHAR_BIT - 1);
    static const unsigned n_readers_ = ~write_entered_;

public:

    shared_mutex() : state_(0) {}

    // Exclusive ownership
    void lock();
    bool try_lock();
    void unlock();

    // Shared ownership
    void lock_shared();
    bool try_lock_shared();
    void unlock_shared();
};

// Exclusive ownership

void shared_mutex::lock()
{
    unique_lock<mutex> lk(mut_);
    // gate1_: wait until no other writer has entered.
    while (state_ & write_entered_)
        gate1_.wait(lk);
    state_ |= write_entered_;
    // gate2_: wait until the readers that still hold ownership have left.
    while (state_ & n_readers_)
        gate2_.wait(lk);
}

bool shared_mutex::try_lock()
{
    unique_lock<mutex> lk(mut_, try_to_lock);
    if (lk.owns_lock() && state_ == 0)
    {
        state_ = write_entered_;
        return true;
    }
    return false;
}

void shared_mutex::unlock()
{
    {
        lock_guard<mutex> _(mut_);
        state_ = 0;
    }
    gate1_.notify_all();
}

// Shared ownership

void shared_mutex::lock_shared()
{
    unique_lock<mutex> lk(mut_);
    // gate1_: wait while a writer has entered or the reader count is saturated.
    while ((state_ & write_entered_) || (state_ & n_readers_) == n_readers_)
        gate1_.wait(lk);
    unsigned num_readers = (state_ & n_readers_) + 1;
    state_ &= ~n_readers_;
    state_ |= num_readers;
}

bool shared_mutex::try_lock_shared()
{
    unique_lock<mutex> lk(mut_, try_to_lock);
    unsigned num_readers = state_ & n_readers_;
    if (lk.owns_lock() && !(state_ & write_entered_) && num_readers != n_readers_)
    {
        ++num_readers;
        state_ &= ~n_readers_;
        state_ |= num_readers;
        return true;
    }
    return false;
}

void shared_mutex::unlock_shared()
{
    lock_guard<mutex> _(mut_);
    unsigned num_readers = (state_ & n_readers_) - 1;
    state_ &= ~n_readers_;
    state_ |= num_readers;
    if (state_ & write_entered_)
    {
        // A writer is waiting at gate2_; wake it once the last reader leaves.
        if (num_readers == 0)
            gate2_.notify_one();
    }
    else
    {
        // No writer waiting: if we just dropped below the reader maximum,
        // let one blocked reader (or writer) through gate1_.
        if (num_readers == n_readers_ - 1)
            gate1_.notify_one();
    }
}
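For orientation, here is a minimal usage sketch. It is not part of N2406; the names data_mutex, shared_data, read_first and append are invented for illustration. Readers take shared ownership around their reads, and the writer takes exclusive ownership around its update:

#include <vector>

shared_mutex     data_mutex;   // the class defined above
std::vector<int> shared_data;  // data protected by data_mutex

int read_first()               // many threads may run this concurrently
{
    data_mutex.lock_shared();
    int value = shared_data.empty() ? 0 : shared_data.front();
    data_mutex.unlock_shared();
    return value;
}

void append(int value)         // at most one thread at a time gets past lock()
{
    data_mutex.lock();
    shared_data.push_back(value);
    data_mutex.unlock();
}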

The algorithm is derived from an old newsgroup posting of Alexander Terekhov. It starves neither readers nor writers.

There are two "gates", gate1_ and gate2_. Readers and writers have to pass gate1_, and can get blocked in trying to do so. Once a reader gets past gate1_, it has read-locked the mutex. Readers can get past gate1_ as long as the maximum number of readers does not already hold ownership, and as long as a writer has not gotten past gate1_.

Only one writer at a time can get past gate1_. And a writer can get past gate1_ even if readers have ownership. But once past gate1_, a writer still does not have ownership. It must first get past gate2_. A writer cannot get past gate2_ until all readers with ownership have relinquished it. Recall that new readers can't get past gate1_ while a writer is waiting at gate2_. And neither can a new writer get past gate1_ while a writer is waiting at gate2_.
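To make the two-gate hand-off concrete, here is one possible interleaving, written out as the state_ values the code above would go through (assuming a 32-bit unsigned, so write_entered_ == 0x80000000; the thread names R1, R2, R3, W1, W2 are invented for illustration):

R1: lock_shared()    -> passes gate1_, state_ == 0x00000001
R2: lock_shared()    -> passes gate1_, state_ == 0x00000002
W1: lock()           -> passes gate1_, sets write_entered_,
                        state_ == 0x80000002, then waits at gate2_
R3: lock_shared()    -> blocks at gate1_ (write_entered_ is set)
W2: lock()           -> blocks at gate1_ (write_entered_ is set)
R1: unlock_shared()  -> state_ == 0x80000001
R2: unlock_shared()  -> state_ == 0x80000000; the reader count hit 0,
                        so gate2_ is notified and W1's lock() returns
W1: unlock()         -> state_ == 0x00000000; gate1_.notify_all() wakes
                        R3 and W2, which compete for gate1_ again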

The characteristic that both readers and writers are blocked at gate1_, with (nearly) identical requirements imposed to get past it, is what makes this algorithm fair to both readers and writers, starving neither.
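Put side by side, the two gate1_ conditions from the code above differ only in the reader-count saturation check:

// Writer at gate1_ (from lock()):
while (state_ & write_entered_)
    gate1_.wait(lk);

// Reader at gate1_ (from lock_shared()):
while ((state_ & write_entered_) || (state_ & n_readers_) == n_readers_)
    gate1_.wait(lk);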

The mutex "state" is intentionally kept in a single word so as to suggest that the partial use of atomics (as an optimization) for certain state changes is a possibility (i.e. for an uncontended "fast path"). However, that optimization is not demonstrated here. One example would be if a writer thread could atomically change state_ from 0 to write_entered_, then it would obtain the lock without having to block or even lock/unlock mut_. And unlock() could be implemented with an atomic store. Etc. These optimizations are not shown herein because they are much harder to implement correctly than this simple description makes it sound.
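Purely to illustrate the shape of such a fast path, and emphatically not as a correct or complete optimization, a writer's uncontended acquisition could look roughly like the sketch below, assuming state_ were made a std::atomic<unsigned>. The coordination with the gate1_/gate2_ waiters that makes this genuinely hard is omitted:

#include <atomic>
#include <climits>

// Hypothetical sketch only -- NOT a drop-in optimization of the class above.
std::atomic<unsigned> state(0);   // would play the role of state_
const unsigned write_entered = 1U << (sizeof(unsigned)*CHAR_BIT - 1);

bool writer_fast_path_lock()
{
    unsigned expected = 0;        // completely idle: no readers, no writer
    // One CAS takes exclusive ownership without touching mut_, gate1_ or gate2_.
    // On failure the caller would fall back to the mutex/condition-variable
    // slow path, and unlock() would need to know whether anyone is waiting --
    // that bookkeeping is the part omitted here.
    return state.compare_exchange_strong(expected, write_entered,
                                         std::memory_order_acquire);
}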

answered Oct 07 '22 by Howard Hinnant