First of all: I'm a complete newbie at mutex/multithreaded programming, so sorry in advance for any errors...
I have a program that runs multiple threads. The threads (usually one per CPU core) do a lot of calculation and "thinking", and then sometimes decide to call a particular (shared) method that updates some statistics. Concurrency on the statistics updates is managed through a mutex:
```cpp
stats_mutex.lock();
common_area->update_thread_stats( ... );
stats_mutex.unlock();
```
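As a side note, the same update can be written with an RAII guard so the mutex is released even if the update throws. `CommonArea` and `record_stats` below are illustrative stand-ins for the poster's own types, not names from the question:

```cpp
#include <mutex>

// Hypothetical stand-in for the poster's shared statistics area.
struct CommonArea {
    long updates = 0;
    void update_thread_stats(int /*thread_id*/) { ++updates; }
};

std::mutex stats_mutex;
CommonArea area;

// RAII version of the manual lock()/unlock() pair: the guard
// releases stats_mutex when it goes out of scope, even on exceptions.
void record_stats(int thread_id) {
    std::lock_guard<std::mutex> guard(stats_mutex);
    area.update_thread_stats(thread_id);
}
```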
Now to the problem. Of all those threads there is one particular thread that needs almost realtime priority, because it's the only one that actually operates.
With "almost realtime priority" I mean:
Let's suppose thread t0 is the "privileged" one and t1...t15 are the normal ones. What happens now is:
What I need is:
So, what's the best (possibly simplest) method to do this thing?
What I was thinking is to have a bool variable called "privileged_needs_lock".
But I think I'd need another mutex to manage access to that variable... I don't know if this is the right way...
Additional info:
Any idea is appreciated. Thanks
The solution below works (the three-mutex approach):
```cpp
#include <thread>
#include <iostream>
#include <mutex>
#include <unistd.h>

std::mutex M;  // data mutex
std::mutex N;  // next-to-access mutex
std::mutex L;  // low-priority entry mutex

void lowpriolock() {
    L.lock();
    N.lock();
    M.lock();
    N.unlock();
}

void lowpriounlock() {
    M.unlock();
    L.unlock();
}

void highpriolock() {
    N.lock();
    M.lock();
    N.unlock();
}

void highpriounlock() {
    M.unlock();
}

void hpt(const char* s) {
    highpriolock();
    std::cout << s << std::endl;
    sleep(2);
    highpriounlock();
}

void lpt(const char* s) {
    lowpriolock();
    std::cout << s << std::endl;
    sleep(2);
    lowpriounlock();
}

int main() {
    std::thread t0(lpt, "low prio t0 working here");
    std::thread t1(lpt, "low prio t1 working here");
    std::thread t2(hpt, "high prio t2 working here");
    std::thread t3(lpt, "low prio t3 working here");
    std::thread t4(lpt, "low prio t4 working here");
    std::thread t5(lpt, "low prio t5 working here");
    std::thread t6(lpt, "low prio t6 working here");
    std::thread t7(lpt, "low prio t7 working here");
    t0.join(); t1.join(); t2.join(); t3.join();
    t4.join(); t5.join(); t6.join(); t7.join();
    return 0;
}
```
Tried the solution below as suggested, but it does not work (compiled with `g++ -std=c++0x -o test test.cpp -lpthread`):
```cpp
#include <thread>
#include <mutex>
#include <cstdio>
#include <time.h>
#include <unistd.h>
#include <pthread.h>

std::mutex l;

void waiter() {
    l.lock();
    printf("Here i am, waiter starts\n");
    sleep(2);
    printf("Here i am, waiter ends\n");
    l.unlock();
}

void privileged(int id) {
    usleep(200000);
    l.lock();
    usleep(200000);
    printf("Here i am, privileged (%d)\n", id);
    l.unlock();
}

void normal(int id) {
    usleep(200000);
    l.lock();
    usleep(200000);
    printf("Here i am, normal (%d)\n", id);
    l.unlock();
}

int main() {
    std::thread tw(waiter);
    std::thread t1(normal, 1);
    std::thread t0(privileged, 0);
    std::thread t2(normal, 2);

    sched_param sch;
    int policy;
    pthread_getschedparam(t0.native_handle(), &policy, &sch);
    sch.sched_priority = -19;  // note: outside the valid SCHED_FIFO range
    pthread_setschedparam(t0.native_handle(), SCHED_FIFO, &sch);
    pthread_getschedparam(t1.native_handle(), &policy, &sch);
    sch.sched_priority = 18;
    pthread_setschedparam(t1.native_handle(), SCHED_FIFO, &sch);
    pthread_getschedparam(t2.native_handle(), &policy, &sch);
    sch.sched_priority = 18;
    pthread_setschedparam(t2.native_handle(), SCHED_FIFO, &sch);

    tw.join();
    t1.join();
    t0.join();
    t2.join();
    return 0;
}
```
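One likely reason the snippet above fails: `SCHED_FIFO` priorities must lie inside the range reported by `sched_get_priority_min()`/`sched_get_priority_max()` (1..99 on Linux), so `-19`, which looks like a nice value, is rejected by `pthread_setschedparam`. Setting realtime priorities also normally requires root (or `CAP_SYS_NICE`). A small probe for the valid range:

```cpp
#include <sched.h>

// Query the platform's valid SCHED_FIFO priority bounds
// before assigning any thread priority.
int fifo_min() { return sched_get_priority_min(SCHED_FIFO); }
int fifo_max() { return sched_get_priority_max(SCHED_FIFO); }
```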
Often one wishes to use threads within a program with differing intrinsic priorities. For example, a thread which deals with the user interface should have a lower latency compared to a compute thread, so that the user experience is improved.
No, only the order of acquisition matters. As long as you hold them, you can release mutexes in any order. Releasing in a specific order may be more "efficient" if work can proceed elsewhere while only one of the mutexes is held, but either way it's deadlock-free.
Mutexes are used to protect shared resources. If the mutex is already locked by another thread, the thread waits for the mutex to become available. The thread that has locked a mutex becomes its current owner and remains the owner until the same thread has unlocked it.
The non-member function std::lock allows locking more than one mutex object simultaneously, avoiding the potential deadlocks that can happen when multiple threads lock/unlock individual mutex objects in different orders.
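For example, `std::lock` can take two mutexes at once and then hand ownership to adopting guards; `transfer` and `shared_total` below are illustrative names, not part of the quoted documentation:

```cpp
#include <mutex>

std::mutex a, b;
int shared_total = 0;

// std::lock acquires both mutexes deadlock-free, regardless of the
// order in which other threads list them; the adopt_lock guards then
// take over ownership so both are released on scope exit.
void transfer(int amount) {
    std::lock(a, b);
    std::lock_guard<std::mutex> ga(a, std::adopt_lock);
    std::lock_guard<std::mutex> gb(b, std::adopt_lock);
    shared_total += amount;
}
```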
I can think of three methods using only threading primitives:
Three mutexes would work here: M protects the data itself, N is a "next-to-access" handoff mutex, and L gates the low-priority threads.
Access patterns are:
Low-priority threads: lock L, lock N, lock M, unlock N, { do stuff }, unlock M, unlock L
High-priority thread: lock N, lock M, unlock N, { do stuff }, unlock M
That way the access to the data is protected, and the high-priority thread can get ahead of the low-priority threads in access to it.
The primitive way to do this is with a condition variable and an atomic:
Mutex M;
Condvar C;
atomic bool hpt_waiting;
Data access patterns:
Low-priority thread: lock M, while (hpt_waiting) wait C on M, { do stuff }, broadcast C, unlock M
High-priority thread: hpt_waiting := true, lock M, hpt_waiting := false, { do stuff }, broadcast C, unlock M
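A minimal C++11 sketch of the condvar-plus-atomic scheme; `PrioLock` and its method names are mine, not from the original answer, and this is an illustration rather than a hardened implementation:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>

class PrioLock {
    std::mutex m_;
    std::condition_variable c_;
    std::atomic<bool> hpt_waiting_{false};
public:
    // Low-priority path: back off while a high-priority thread is waiting.
    void lock_low() {
        std::unique_lock<std::mutex> lk(m_);
        c_.wait(lk, [this] { return !hpt_waiting_.load(); });
        lk.release();  // keep m_ locked; unlock_low() releases it
    }
    void unlock_low() {
        m_.unlock();
        c_.notify_all();
    }
    // High-priority path: raise the flag first, then take the mutex.
    void lock_high() {
        hpt_waiting_ = true;
        m_.lock();
        hpt_waiting_ = false;
    }
    void unlock_high() {
        m_.unlock();
        c_.notify_all();
    }
};
```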
Alternatively you can use two non-atomic bools with a condvar; in this technique the mutex/condvar protects the flags, and the data is protected not by a mutex but by a flag:
Mutex M;
Condvar C;
bool data_held, hpt_waiting;
Low-priority thread: lock M, while (hpt_waiting or data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, broadcast C, unlock M
High-priority thread: lock M, hpt_waiting := true, while (data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, hpt_waiting := false, broadcast C, unlock M
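The two-flag pseudocode above can be sketched in C++11 as follows; `FlagLock` and its method names are illustrative. Note that, as the text says, the mutex only protects the flags: "owning the data" means `data_held_` is true, and the mutex is not held while doing the actual work:

```cpp
#include <condition_variable>
#include <mutex>

class FlagLock {
    std::mutex m_;               // protects the two flags only
    std::condition_variable c_;
    bool data_held_ = false;     // true while some thread owns the data
    bool hpt_waiting_ = false;   // true while the high-prio thread waits
public:
    void acquire_low() {
        std::unique_lock<std::mutex> lk(m_);
        c_.wait(lk, [this] { return !hpt_waiting_ && !data_held_; });
        data_held_ = true;
    }
    void release_low() {
        std::lock_guard<std::mutex> lk(m_);
        data_held_ = false;
        c_.notify_all();
    }
    void acquire_high() {
        std::unique_lock<std::mutex> lk(m_);
        hpt_waiting_ = true;
        c_.wait(lk, [this] { return !data_held_; });
        data_held_ = true;
    }
    void release_high() {
        std::lock_guard<std::mutex> lk(m_);
        data_held_ = false;
        hpt_waiting_ = false;
        c_.notify_all();
    }
};
```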
Put requesting threads on a 'priority queue'. The privileged thread can get first go at the data when it's free.
One way to do this would be with an array of ConcurrentQueues[privilegeLevel], a lock and some events.
Any thread that wants access to the data enters the lock. If the data is free (a boolean), it takes the data object and exits the lock. If the data is in use by another thread, the requesting thread pushes an event onto one of the concurrent queues, depending on its privilege level, exits the lock and waits on the event.
When a thread wants to release its ownership of the data object, it takes the lock and iterates the array of ConcurrentQueues from the highest-privilege end down, looking for an event (i.e. queue count > 0). If it finds one, it signals it and exits the lock; if not, it sets the 'dataFree' boolean and exits the lock.
When a thread waiting on an event for access to the data is made ready, it may access the data object.
I think that should work. Please, other developers, check this design and see if you can spot any races etc.
Edit - probably don't even need concurrent queues because of the explicit lock across them all. Any old queue would do.
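A hedged sketch of this design, generalised to N privilege levels: instead of queued events it uses a waiter count and a condition variable per level, and the releaser wakes the highest busy level. `PrivilegeLock` is my name for it, and (as in the original design) a freshly arriving thread can still barge in between release and wakeup:

```cpp
#include <array>
#include <condition_variable>
#include <mutex>

template <int Levels>
class PrivilegeLock {
    std::mutex m_;                                   // the explicit lock
    std::array<std::condition_variable, Levels> cv_; // one "event" per level
    std::array<int, Levels> waiting_{};              // queue counts
    bool data_free_ = true;
public:
    void acquire(int level) {
        std::unique_lock<std::mutex> lk(m_);
        if (!data_free_) {
            ++waiting_[level];
            cv_[level].wait(lk, [this] { return data_free_; });
            --waiting_[level];
        }
        data_free_ = false;
    }
    void release() {
        std::lock_guard<std::mutex> lk(m_);
        data_free_ = true;
        // Scan from the highest-privilege end down, as in the text.
        for (int i = Levels - 1; i >= 0; --i) {
            if (waiting_[i] > 0) {
                cv_[i].notify_one();
                return;
            }
        }
    }
};
```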