I'm acquiring some resources in increasing order. Which version is better? I was told that #2 leads to starvation of threads wanting higher numbered resources. Is this true? If so, how and why?
a[] is a sorted array of resource indices.
1.

    for (int i = 1; i < N; ++i) {
        lock(mutex);
        while (!resource_available[a[i]]) {
            pthread_cond_wait(&cond_w[a[i]], &mutex);
        }
        resource_available[a[i]] = 0;
        unlock(mutex);
    }
2.

    lock(mutex);
    for (int i = 1; i < N; ++i) {
        while (!resource_available[a[i]]) {
            pthread_cond_wait(&cond_w[a[i]], &mutex);
        }
        resource_available[a[i]] = 0;
    }
    unlock(mutex);
EDIT: It turns out that the order in which you release the resources makes the difference, not the constructs above. If you release them in the same order you acquired them, starvation can happen; if you release them in the opposite order, it probably cannot.
Pthread uses sys_clone() to create new threads, which the kernel sees as a new task that happens to share many data structures with other threads. To do synchronization, pthread relies heavily on futexes in the kernel.
pthreads predates C11, which introduced standard threading in C. The header file is <threads.h>, with functions such as thrd_create. The standard functions for threads, mutexes, and condition variables provide portability guarantees that pthreads cannot.
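A short sketch of the C11 API mentioned above: several threads created with thrd_create increment a shared counter under a C11 mutex. The helper name `run_workers` and the counts are my own; note that <threads.h> support varies by toolchain (glibc only added it in 2.28).

```c
#include <threads.h>

static mtx_t m;
static int counter;

/* thrd_start_t routine: increment the counter `*arg` times under the
   mutex; the int return value is what thrd_join can retrieve. */
static int worker(void *arg) {
    int times = *(int *)arg;
    for (int i = 0; i < times; ++i) {
        mtx_lock(&m);
        ++counter;
        mtx_unlock(&m);
    }
    return 0;
}

/* Spawn nthreads workers (at most 16) and return the final count. */
static int run_workers(int nthreads, int times) {
    thrd_t t[16];
    mtx_init(&m, mtx_plain);
    counter = 0;
    for (int i = 0; i < nthreads; ++i)
        thrd_create(&t[i], worker, &times);
    for (int i = 0; i < nthreads; ++i)
        thrd_join(t[i], NULL);
    mtx_destroy(&m);
    return counter;
}
```

The shape mirrors pthreads almost one-to-one: thrd_create/thrd_join for pthread_create/pthread_join, mtx_t for pthread_mutex_t, cnd_t for pthread_cond_t.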
You are not required to call pthread_exit. The thread function can simply return when it is finished. From the man page: "An implicit call to pthread_exit() is made when a thread other than the thread in which main() was first invoked returns from the start routine that was used to create it."
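A minimal sketch of that equivalence: the start routine simply returns a pointer, and pthread_join receives it exactly as if pthread_exit had been called. The helper names `square` and `run_square` are my own.

```c
#include <pthread.h>

/* Returning from the start routine is equivalent to pthread_exit(ret).
   The result must outlive the thread, hence the static. */
static void *square(void *arg) {
    static int result;
    result = *(int *)arg;
    result *= result;
    return &result;            /* same effect as pthread_exit(&result) */
}

/* Create the thread, join it, and read back the returned value. */
static int run_square(int x) {
    pthread_t t;
    void *ret;
    pthread_create(&t, NULL, square, &x);
    pthread_join(t, &ret);     /* receives the pointer square returned */
    return *(int *)ret;
}
```

Compile with `-lpthread`.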
A process can exit at any time when any of its threads calls the exit subroutine; that terminates the entire process, including all of its threads. A single thread, by contrast, can exit at any time by calling the pthread_exit subroutine without affecting the others.
Both are virtually equivalent: in example 1 the thread will almost always reacquire the mutex immediately, without sleeping, right after unlocking it, since only two expressions are evaluated in between.