 

How are read/write locks implemented in pthread?

Tags:

c

linux

pthreads

How are read/write locks implemented, especially in the case of pthreads? What pthread synchronization APIs do they use under the hood? A little bit of pseudocode would be appreciated.

asked Jun 14 '12 by pythonic
People also ask

How do read/write locks work?

An RW lock allows concurrent access for read-only operations, while write operations require exclusive access. This means that multiple threads can read the data in parallel, but an exclusive lock is needed for writing or modifying data.

What is read/write lock Pthread?

In many situations, data is read more often than it is modified or written. In these cases, you can allow threads to read concurrently while holding the lock and allow only one thread to hold the lock when data is modified. A multiple-reader single-writer lock (or read/write lock) does this.

What is read locking?

Once a row is read locked, no other transaction can obtain a write lock on it. Acquiring a read lock ensures that a different transaction does not modify or delete a row while it is being read.

Why do we need read locks?

A read lock allows multiple concurrent readers of some data, but it prevents readers from accessing the data while a writer is in the middle of changing it. That ensures that a reader will never see a partial update (a state where the writer has updated some parts of the data but not all of them).
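For reference, this is what the standard POSIX API the question asks about looks like from the caller's side. A minimal sketch; the shared_value variable and the thread functions are illustrative names, not part of the question:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value = 0;      /* data protected by the rw-lock */

void *reader(void *arg) {
    pthread_rwlock_rdlock(&lock);  /* many readers may hold this at once */
    printf("read %d\n", shared_value);
    pthread_rwlock_unlock(&lock);
    return NULL;
}

void *writer(void *arg) {
    pthread_rwlock_wrlock(&lock);  /* exclusive: blocks until all readers leave */
    shared_value++;
    pthread_rwlock_unlock(&lock);
    return NULL;
}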


1 Answer

I haven't done any pthreads programming for a while, but when I did, I never used POSIX read/write locks. The problem is that most of the time a mutex will suffice: i.e., your critical section is small, and the region isn't so performance-critical that the double barrier is worth worrying about.

In those cases where performance is an issue, atomic operations (generally available as a compiler extension) are normally a better option (i.e., the extra barrier is the problem, not the size of the critical section).
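For example, a shared counter that would otherwise need a lock can often be handled with C11 atomics (or the equivalent GCC __atomic builtins). A minimal sketch, assuming the shared data really is just one counter:

#include <stdatomic.h>

static atomic_long hits;    /* shared counter: no mutex, no rw-lock */

void record_hit(void) {
    /* a single atomic read-modify-write, no blocking */
    atomic_fetch_add_explicit(&hits, 1, memory_order_relaxed);
}

long read_hits(void) {
    return atomic_load_explicit(&hits, memory_order_relaxed);
}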

By the time you eliminate all these cases, you are left with cases where you have specific performance/fairness/rw-bias requirements that require a true rw-lock; and that is when you discover that all the relevant performance/fairness parameters of POSIX rw-locks are undefined and implementation-specific. At this point you are generally better off implementing your own, so you can ensure the appropriate fairness/rw-bias requirements are met.
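To illustrate how implementation-specific this is: glibc exposes a non-portable knob for its rw-lock bias, and even then only the non-recursive writer-preference mode actually takes effect. A sketch of that extension (glibc-only, not POSIX):

#define _GNU_SOURCE
#include <pthread.h>

static pthread_rwlock_t lock;

void init_writer_preferring_lock(void) {
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init(&attr);
    /* glibc extension: prefer waiting writers over new readers.
       The default (PTHREAD_RWLOCK_PREFER_READER_NP) can starve writers. */
    pthread_rwlockattr_setkind_np(&attr,
            PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    pthread_rwlock_init(&lock, &attr);
    pthread_rwlockattr_destroy(&attr);
}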

The basic algorithm is to keep a count of how many of each are in the critical section, and if a thread isn't allowed access yet, to shunt it off to an appropriate queue to wait. Most of your effort will be in implementing the appropriate fairness/bias between servicing the two queues.

The following C-like, pthreads-like pseudo-code illustrates what I'm trying to say.

struct rwlock {
  mutex admin; // used to serialize access to other admin fields, NOT the critical section.
  int count; // threads in critical section +ve for readers, -ve for writers.
  fifoDequeue dequeue; // acts like a cond_var with fifo behaviour and both append and prepend operations.
  void *data; // represents the data covered by the critical section.
};

void read(struct rwlock *rw, void (*readAction)(void *)) {
  lock(rw->admin);
  if (rw->count < 0) {
    append(rw->dequeue, rw->admin);
  }
  while (rw->count < 0) {
    prepend(rw->dequeue, rw->admin); // Used to avoid starvation.
  }
  rw->count++;
  // Wake the new head of the dequeue, which may be a reader.
  // If it is a writer it will put itself back on the head of the queue and wait for us to exit.
  signal(rw->dequeue); 
  unlock(rw->admin);

  readAction(rw->data);

  lock(rw->admin);
  rw->count--;
  signal(rw->dequeue); // Wake the new head of the dequeue, which is probably a writer.
  unlock(rw->admin);
}

void write(struct rwlock *rw, void *(*writeAction)(void *)) {
  lock(rw->admin);
  if (rw->count != 0) {
    append(rw->dequeue, rw->admin);
  }
  while (rw->count != 0) {
    prepend(rw->dequeue, rw->admin);
  }
  rw->count--;
  // As we only allow one writer in at a time, we don't bother signaling here.
  unlock(rw->admin);

  // NOTE: This is the critical section, but it is not covered by the mutex!
  //       The critical section is, rather, covered by the rw-lock itself.
  rw->data = writeAction(rw->data);

  lock(rw->admin);
  rw->count++;
  signal(rw->dequeue);
  unlock(rw->admin);
}

Something like the above code is a starting point for any rwlock implementation. Give some thought to the nature of your problem and replace the dequeue with the appropriate logic that determines which class of thread should be woken up next. It is common to allow a limited number/period of readers to leapfrog writers or vice versa, depending on the application.

Of course, my general preference is to avoid rw-locks altogether, generally by using some combination of atomic operations, mutexes, STM, message-passing, and persistent data structures. However, there are times when what you really need is a rw-lock, and when you do it is useful to know how they work, so I hope this helped.

EDIT - In response to the (very reasonable) question of where the wait happens in the pseudo-code above:

I have assumed that the dequeue implementation contains the wait, so that somewhere within append(dequeue, mutex) or prepend(dequeue, mutex) there is a block of code along the lines of:

while(!readyToLeaveQueue()) {
  wait(dequeue->cond_var, mutex);
}

which was why I passed in the relevant mutex to the queue operations.
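To make that concrete, here is a minimal runnable sketch of the same counting algorithm using a plain pthread_cond_t in place of the fifo dequeue. It gets its writer preference from a pending-writer count rather than a real queue, so it simplifies the fairness logic above rather than translating it faithfully:

#include <pthread.h>

struct my_rwlock {
    pthread_mutex_t admin;   /* protects the fields below, NOT the data */
    pthread_cond_t  cond;    /* stands in for the fifo dequeue */
    int count;               /* +n: n readers inside, -1: one writer inside */
    int writers_waiting;     /* crude writer preference instead of a queue */
};

void my_rwlock_init(struct my_rwlock *rw) {
    pthread_mutex_init(&rw->admin, NULL);
    pthread_cond_init(&rw->cond, NULL);
    rw->count = 0;
    rw->writers_waiting = 0;
}

void my_read_lock(struct my_rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    /* Block while a writer is inside, or one is waiting (writer bias). */
    while (rw->count < 0 || rw->writers_waiting > 0)
        pthread_cond_wait(&rw->cond, &rw->admin);
    rw->count++;
    pthread_mutex_unlock(&rw->admin);
}

void my_read_unlock(struct my_rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    if (--rw->count == 0)
        pthread_cond_broadcast(&rw->cond);  /* last reader out wakes a writer */
    pthread_mutex_unlock(&rw->admin);
}

void my_write_lock(struct my_rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    rw->writers_waiting++;
    while (rw->count != 0)                  /* wait out readers and writers */
        pthread_cond_wait(&rw->cond, &rw->admin);
    rw->writers_waiting--;
    rw->count = -1;                         /* exactly one writer inside */
    pthread_mutex_unlock(&rw->admin);
}

void my_write_unlock(struct my_rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    rw->count = 0;
    pthread_cond_broadcast(&rw->cond);      /* wake both readers and writers */
    pthread_mutex_unlock(&rw->admin);
}

Both readers and writers wait on the same condition variable, which is why the unlock paths broadcast rather than signal; a queue-based version like the pseudo-code above would instead wake exactly the thread at the head of the queue.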

answered Oct 25 '22 by Recurse