 

How to give priority to privileged thread in mutex locking?

First of all: I am completely a newbie in mutex/multithread programming, so sorry for any error in advance...

I have a program that runs multiple threads. The threads (usually one per cpu core) do a lot of calculation and "thinking" and then sometimes they decide to call a particular (shared) method that updates some statistics. The concurrency on statistics updates is managed through the use of a mutex:

stats_mutex.lock();
common_area->update_thread_stats( ... );
stats_mutex.unlock();

Now to the problem. Among all those threads there is one particular thread that needs almost realtime priority, because it's the only thread that actually operates.

With "almost realtime priority" I mean:

Let's suppose thread t0 is the "privileged" one and t1...t15 are the normal ones. What happens now is:

  • Thread t1 acquires the lock.
  • Threads t2, t3 and t0 call the lock() method and wait for it to succeed.
  • Thread t1 calls unlock().
  • One of the threads t2, t3, t0 (at random, as far as I know) succeeds in acquiring the lock, and the others continue to wait.

What I need is:

  • Thread t1 acquires the lock.
  • Threads t2, t3 and t0 call the lock() method and wait for it to succeed.
  • Thread t1 calls unlock().
  • Thread t0 acquires the lock, since it is privileged.

So, what's the best (and possibly simplest) method to achieve this?

What I was thinking is to have a bool variable called "privileged_needs_lock".

But I think I need another mutex to manage access to this variable... I don't know if this is the right approach...

Additional info:

  • my threads use C++11 (as of gcc 4.6.3)
  • code needs to run on both Linux and Windows (but tested only on Linux at the moment).
  • performance of the locking mechanism is not an issue (my performance problems are in the internal thread calculations, and the number of threads will always be low, one or two per CPU core at most)

Any ideas are appreciated. Thanks.


The solution below works (the three-mutex approach):

#include <thread>
#include <iostream>
#include <mutex>
#include "unistd.h"

std::mutex M;
std::mutex N;
std::mutex L;

void lowpriolock(){
  L.lock();
  N.lock();
  M.lock();
  N.unlock();
}

void lowpriounlock(){
  M.unlock();
  L.unlock();
}

void highpriolock(){
  N.lock();
  M.lock();
  N.unlock();
}

void highpriounlock(){
  M.unlock();
}

void hpt(const char* s){
  using namespace std;
  //cout << "hpt trying to get lock here" << endl;
  highpriolock();
  cout << s << endl;
  sleep(2);
  highpriounlock();
}

void lpt(const char* s){
  using namespace std;
  //cout << "lpt trying to get lock here" << endl;
  lowpriolock();
  cout << s << endl;
  sleep(2);
  lowpriounlock();
}

int main(){
  std::thread t0(lpt,"low prio t0 working here");
  std::thread t1(lpt,"low prio t1 working here");
  std::thread t2(hpt,"high prio t2 working here");
  std::thread t3(lpt,"low prio t3 working here");
  std::thread t4(lpt,"low prio t4 working here");
  std::thread t5(lpt,"low prio t5 working here");
  std::thread t6(lpt,"low prio t6 working here");
  std::thread t7(lpt,"low prio t7 working here");
  //std::cout << "All threads created" << std::endl;
  t0.join();
  t1.join();
  t2.join();
  t3.join();
  t4.join();
  t5.join();
  t6.join();
  t7.join();
  return 0;
}

I tried the solution below, as suggested, but it does not work (compiled with "g++ -std=c++0x -o test test.cpp -lpthread"):

#include <thread>
#include <mutex>

#include "time.h"
#include "pthread.h"

std::mutex l;

void waiter(){
  l.lock();
  printf("Here i am, waiter starts\n");
  sleep(2);
  printf("Here i am, waiter ends\n");
  l.unlock();
}

void privileged(int id){
  usleep(200000);
  l.lock();
  usleep(200000);
  printf("Here i am, privileged (%d)\n",id);
  l.unlock();
}

void normal(int id){
  usleep(200000);
  l.lock();
  usleep(200000);
  printf("Here i am, normal (%d)\n",id);
  l.unlock();
}

int main(){
  std::thread tw(waiter);
  std::thread t1(normal,1);
  std::thread t0(privileged,0);
  std::thread t2(normal,2);

  sched_param sch;
  int policy;
  pthread_getschedparam(t0.native_handle(), &policy, &sch);
  sch.sched_priority = -19;
  pthread_setschedparam(t0.native_handle(), SCHED_FIFO, &sch);

  pthread_getschedparam(t1.native_handle(), &policy, &sch);
  sch.sched_priority = 18;
  pthread_setschedparam(t1.native_handle(), SCHED_FIFO, &sch);

  pthread_getschedparam(t2.native_handle(), &policy, &sch);
  sch.sched_priority = 18;
  pthread_setschedparam(t2.native_handle(), SCHED_FIFO, &sch);

  tw.join();
  t1.join();
  t0.join();
  t2.join();

  return 0;
}
asked Jul 26 '12 by d3k



2 Answers

I can think of three methods using only threading primitives:

Triple mutex

Three mutexes would work here:

  • data mutex ('M')
  • next-to-access mutex ('N'), and
  • low-priority access mutex ('L')

Access patterns are:

  • Low-priority threads: lock L, lock N, lock M, unlock N, { do stuff }, unlock M, unlock L
  • High-priority thread: lock N, lock M, unlock N, { do stuff }, unlock M

That way the access to the data is protected, and the high-priority thread can get ahead of the low-priority threads in access to it.
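For reference, the same access patterns can be wrapped in a small class so callers don't have to remember the locking order. This is only a sketch of the idea above; the class name priority_mutex and its method names are my own, not a standard facility:

#include <mutex>

// Sketch of the triple-mutex pattern as a reusable class.
// M protects the data, N is the "next-to-access" mutex,
// L throttles the low-priority threads.
class priority_mutex {
    std::mutex M, N, L;
public:
    void lock_low() {       // low-priority threads
        L.lock();
        N.lock();
        M.lock();
        N.unlock();
    }
    void unlock_low() {
        M.unlock();
        L.unlock();
    }
    void lock_high() {      // the privileged thread
        N.lock();
        M.lock();
        N.unlock();
    }
    void unlock_high() {
        M.unlock();
    }
};

A low-priority thread brackets its critical section with lock_low()/unlock_low(), the privileged thread with lock_high()/unlock_high(); the behaviour is the same as the free functions in the question's working example.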

Mutex, condition variable, atomic flag

The primitive way to do this is with a condition variable and an atomic flag:

  • Mutex M;
  • Condvar C;
  • atomic bool hpt_waiting;

Data access patterns (a C++11 sketch follows the list):

  • Low-priority thread: lock M, while (hpt_waiting) wait C on M, { do stuff }, broadcast C, unlock M
  • High-priority thread: hpt_waiting := true, lock M, hpt_waiting := false, { do stuff }, broadcast C, unlock M
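A minimal C++11 sketch of this pattern (the function names low_priority_update and high_priority_update are illustrative only, and the "do stuff" comments stand for whatever touches the shared data):

#include <mutex>
#include <condition_variable>
#include <atomic>

std::mutex M;
std::condition_variable C;
std::atomic<bool> hpt_waiting(false);

void low_priority_update() {
    std::unique_lock<std::mutex> lk(M);
    // stand aside while the high-priority thread has announced itself
    C.wait(lk, []{ return !hpt_waiting.load(); });
    // ... do stuff on the shared data ...
    C.notify_all();
}   // M is released when lk goes out of scope

void high_priority_update() {
    hpt_waiting = true;                  // announce intent before blocking on M
    std::unique_lock<std::mutex> lk(M);
    hpt_waiting = false;
    // ... do stuff on the shared data ...
    C.notify_all();
}   // M is released when lk goes out of scope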

Mutex, condition variable, two non-atomic flags

Alternatively you can use two non-atomic bools with a condvar; in this technique the mutex/condvar protects the flags, and the data is protected not by a mutex but by a flag (again, a sketch follows the list):

  • Mutex M;

  • Condvar C;

  • bool data_held, hpt_waiting;

  • Low-priority thread: lock M, while (hpt_waiting or data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, broadcast C, unlock M

  • High-priority thread: lock M, hpt_waiting := true, while (data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, hpt_waiting := false, broadcast C, unlock M
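As above, a hedged C++11 sketch with illustrative names; note that here M protects only the two flags, and the shared data is touched outside the mutex while data_held is true:

#include <mutex>
#include <condition_variable>

std::mutex M;                   // protects the two flags only, not the data
std::condition_variable C;
bool data_held = false;
bool hpt_waiting = false;

void low_priority_access() {
    {
        std::unique_lock<std::mutex> lk(M);
        C.wait(lk, []{ return !hpt_waiting && !data_held; });
        data_held = true;
    }                           // M released; the flag now "owns" the data
    // ... do stuff on the shared data ...
    {
        std::lock_guard<std::mutex> lk(M);
        data_held = false;
    }
    C.notify_all();
}

void high_priority_access() {
    {
        std::unique_lock<std::mutex> lk(M);
        hpt_waiting = true;     // from now on, low-priority threads stand aside
        C.wait(lk, []{ return !data_held; });
        data_held = true;
    }
    // ... do stuff on the shared data ...
    {
        std::lock_guard<std::mutex> lk(M);
        data_held = false;
        hpt_waiting = false;
    }
    C.notify_all();
}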

answered by ecatmur


Put requesting threads on a 'priority queue'. The privileged thread can get first go at the data when it's free.

One way to do this would be with an array of ConcurrentQueues[privilegeLevel], a lock and some events.

Any thread that wants the data enters the lock. If the data is free (a boolean flag says so), it gets the data object and exits the lock. If the data is in use by another thread, the requesting thread pushes an event onto one of the concurrent queues, depending on its privilege level, exits the lock and waits on the event.

When a thread wants to release its ownership of the data object, it gets the lock and iterates the array of ConcurrentQueues from the highest-privilege end down, looking for an event (i.e. queue count > 0). If it finds one, it signals it and exits the lock; if not, it sets the 'dataFree' boolean and exits the lock.

When a thread waiting on an event for access to the data is made ready, it may access the data object.

I think that should work. Please, other developers, check this design and see if you can think of any races etc. I'm still suffering somewhat from 'hospitality overload' after a trip to CZ...

Edit - you probably don't even need concurrent queues, because of the explicit lock across them all. Any old queue would do.
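A rough C++11 translation of this idea, purely as a sketch: the class name prio_gate and its two privilege levels are my own choices, the Windows-style 'events' become per-waiter condition variables, and plain std::queue is used since everything happens under the one lock, as noted in the edit.

#include <mutex>
#include <condition_variable>
#include <queue>
#include <vector>

class prio_gate {
public:
    explicit prio_gate(int levels) : free_(true), waiters_(levels) {}

    // level: higher number = higher privilege
    void acquire(int level) {
        std::unique_lock<std::mutex> lk(m_);
        if (free_) { free_ = false; return; }  // data is free: take it and go
        waiter w;                              // otherwise queue up at our level...
        waiters_[level].push(&w);
        while (!w.signalled) w.cv.wait(lk);    // ...and wait to be handed ownership
    }

    void release() {
        std::lock_guard<std::mutex> lk(m_);
        // scan from the highest privilege level down for a waiter to wake
        for (int level = (int)waiters_.size() - 1; level >= 0; --level) {
            if (!waiters_[level].empty()) {
                waiter* w = waiters_[level].front();
                waiters_[level].pop();
                w->signalled = true;           // ownership passes directly to the waiter
                w->cv.notify_one();
                return;                        // free_ stays false
            }
        }
        free_ = true;                          // nobody waiting: mark the data free
    }

private:
    struct waiter {
        waiter() : signalled(false) {}
        std::condition_variable cv;
        bool signalled;
    };
    std::mutex m_;
    bool free_;
    std::vector<std::queue<waiter*> > waiters_;
};

The privileged thread would call acquire(1)/release() around the statistics update and the normal threads acquire(0)/release(); with the single explicit lock around all the queues there is no window in which a release can miss a waiter.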

answered by Martin James