getvariana: tpp.c:63: __pthread_tpp_change_priority: Assertion `new_prio == -1 || (new_prio >= __sched_fifo_min_prio && new_prio <= __sched_fifo_max_prio)' failed.
Hi all,
I am trying to repeatedly re-run a program that creates 5 threads; after pthread_join() I return, and based on that return I re-run the entire program, i.e. it sits in a while(1) loop.
When I run the program for the second time, I get the error shown above, and I am unable to trace its origin. Can anyone please explain why this error occurs?
FYI: I don't use any mutex locks or semaphores. I wait for the threads to join, after which I re-run the entire program. Does it have anything to do with race conditions? My assumption is that I only move past the pthread_join calls once all 5 threads have joined.
main
{
    while(1)
    {
        test();
    }
}//main

test()
{
    for( i = 0; i < 5; i++ )
        pthread_create( &th[i], NULL, tfunc, &some_struct);

    for( i = 0; i < 5, i++ )
        pthread_join( th[i], NULL);
}

void * tfunc( void * ptr )
{
    // waiting for a callback function to set a global counter to a value
    // sleep until then
    if( g_count == value_needed )
        pthread_exit(NULL);
}
Here is your program cleaned up. It runs without the above assertion:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>

static pthread_t th[5];

void *
tfunc (void *ptr)
{
  sleep (5);                    /* remove this to test it without the sleep */
  pthread_exit (NULL);
}

void
test (void)
{
  int i, rc;

  memset (th, 0, 5 * sizeof (pthread_t));

  for (i = 0; i < 5; i++)
    {
      /* pthread functions return an error number instead of setting errno */
      rc = pthread_create (&th[i], NULL, tfunc, NULL);
      if (rc != 0)
        fprintf (stderr, "pthread_create: %s\n", strerror (rc));
    }

  for (i = 0; i < 5; i++)
    {
      rc = pthread_join (th[i], NULL);
      if (rc != 0)
        fprintf (stderr, "pthread_join: %s\n", strerror (rc));
    }
}

int
main (int argc, char **argv)
{
  while (1)
    {
      test ();
    }

  exit (0);
}
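(For reference, a build command along the lines of gcc -Wall -pthread test.c -o test should work, since the pthread library has to be linked in; the file name here is just an example, not something from the original post.)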
Here's what I noticed when cleaning it up:
- for( i = 0; i < 5, i++ ): the comma where the second semicolon should be is not even valid C, so that join loop may not have been working the way you think.
- In test(), th was not zeroed, so any failed pthread_create left an old thread reference in th for pthread_join to pick up.
- In tfunc, you only called pthread_exit if ( g_count == value_needed ), but then you fell off the end of the function and exited anyway, i.e. every thread was effectively exiting immediately. Note I also tested the version above without the sleep(), so exiting immediately works.
- Various other orthographic issues.
- No error handling.
Given the compilation problems, I suspect you did not actually compile the code you pasted above but something more complicated, and that it is something in that extra code that is causing the issue.
If you post a minimal example of compilable code that actually causes the issue, I might be able to help you further.
The tpp.c:63: __pthread_tpp_change_priority assertion is a known problem with a known solution:
https://sourceware.org/ml/libc-help/2008-05/msg00071.html
In brief, the problem is caused by repeatedly locking a fast (non-recursive) mutex and is solved by using a recursive mutex; the default pthread_mutex_t is not recursive. Is it possible that there is a pthread_mutex_t buried deep inside the code your threads run?
By the way, to make a mutex recursive, set its type attribute to PTHREAD_MUTEX_RECURSIVE_NP.
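For what it's worth, here is a minimal sketch of that suggestion (it is not from the linked thread; the names lock and relock_demo are invented for illustration, and I am assuming glibc, where the portable constant PTHREAD_MUTEX_RECURSIVE is also available):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock;

/* A recursive mutex keeps a lock count, so the same thread may
   lock it again without deadlocking. */
static void
relock_demo (void)
{
  pthread_mutex_lock (&lock);
  pthread_mutex_lock (&lock);   /* second lock by the same thread is fine */
  pthread_mutex_unlock (&lock);
  pthread_mutex_unlock (&lock);
}

int
main (void)
{
  pthread_mutexattr_t attr;

  pthread_mutexattr_init (&attr);
  /* PTHREAD_MUTEX_RECURSIVE is the portable spelling of glibc's
     PTHREAD_MUTEX_RECURSIVE_NP. */
  pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_RECURSIVE);
  pthread_mutex_init (&lock, &attr);
  pthread_mutexattr_destroy (&attr);

  relock_demo ();
  puts ("re-locked the recursive mutex without deadlocking");

  pthread_mutex_destroy (&lock);
  return 0;
}

As with the earlier program, build it with -pthread.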