
Following pointers in a multithreaded environment

If I have some code that looks something like:

typedef struct {
    bool some_flag;

    pthread_cond_t  c;
    pthread_mutex_t m;
} foo_t;

// I assume the mutex has already been locked, and will be unlocked
// some time after this function returns. For clarity. Definitely not
// out of laziness ;)
void check_flag(foo_t* f) {
    while(f->some_flag)
        pthread_cond_wait(&f->c, &f->m);
}

Is there anything in the C standard preventing an optimizer from rewriting check_flag as:

void check_flag(foo_t* f) {
    bool cache = f->some_flag;
    while(cache)
        pthread_cond_wait(&f->c, &f->m);
}

In other words, does the generated code have to follow the f pointer every time through the loop, or is the compiler free to pull the dereference out?

If it is free to pull it out, is there any way to prevent this? Do I need to sprinkle a volatile keyword somewhere? It can't be check_flag's parameter because I plan on having other variables in this struct that I don't mind the compiler optimizing like this.

Might I have to resort to:

void check_flag(foo_t* f) {
    volatile bool* cache = &f->some_flag;
    while(*cache)
        pthread_cond_wait(&f->c, &f->m);
}
asked Jan 13 '11 by Clark Gaebel


3 Answers

In the general case, even if multi-threading wasn't involved and your loop looked like:

void check_flag(foo_t* f) {
    while(f->some_flag)
        foo(&f->c, &f->m);
}

the compiler would be unable to cache the f->some_flag test. That's because the compiler can't know whether a function (like foo() above) might change whatever object f is pointing to.

Under special circumstances (foo() is visible to the compiler, and all pointers passed to the check_flag() are known not to be aliased or otherwise modifiable by foo()) the compiler might be able to optimize the check.
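To make that aliasing point concrete, here is a sketch (function names invented for illustration) of a case where the compiler can see every function involved and prove the flag is never written, so it may legally hoist the load out of the loop:

```c
#include <stdbool.h>

static bool flag;     // never written anywhere in this translation unit
static int  counter;

// Fully visible to the compiler, and provably never touches 'flag'.
static void tick(void) { counter++; }

// Because tick() cannot modify 'flag', the compiler may read 'flag' once
// before the loop. With an opaque call such as foo(&f->c, &f->m), it
// would instead have to reload the flag on every iteration.
void spin(void) {
    while (flag)
        tick();
}
```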

However, pthread_cond_wait() must be implemented in a way that would prevent that optimization.

See Does guarding a variable with a pthread mutex guarantee it's also not cached?

You might also be interested in Steve Jessop's answer to: Can a C/C++ compiler legally cache a variable in a register across a pthread library call?

But how far you want to take the issues raised by Boehm's paper in your own work is up to you. As far as I can tell, if you want to take the stand that pthreads doesn't/can't make the guarantee, then you're in essence taking the stand that pthreads is useless (or at least provides no safety guarantees, which I think by reduction has the same outcome). While this might be true in the strictest sense (as addressed in the paper), it's also probably not a useful answer. I'm not sure what option you'd have other than pthreads on Unix-based platforms.

answered by Michael Burr


Normally, you must lock the pthread mutex before waiting on the condition variable, since the pthread_cond_wait call releases the mutex (and reacquires it before returning). So your check_flag function should be rewritten like this to conform to the semantics of pthread condition variables:

void check_flag(foo_t* f) {
    pthread_mutex_lock(&f->m);
    while(f->some_flag)
        pthread_cond_wait(&f->c, &f->m);
    pthread_mutex_unlock(&f->m);
}
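For completeness, the thread that clears the flag must also hold the same mutex and signal the condition, or the waiter can miss the wakeup. A minimal sketch of that producer side (clear_flag is an invented name; the struct is repeated from the question so the fragment stands alone):

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    bool some_flag;

    pthread_cond_t  c;
    pthread_mutex_t m;
} foo_t;

// Invented helper: clear the flag under the mutex and wake all waiters.
void clear_flag(foo_t* f) {
    pthread_mutex_lock(&f->m);
    f->some_flag = false;            // the write is protected by the same mutex
    pthread_cond_broadcast(&f->c);   // wake every thread blocked in the wait loop
    pthread_mutex_unlock(&f->m);
}
```

Broadcasting rather than signalling is the safe default when more than one thread may be waiting on the same condition.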

Concerning the question of whether the compiler is allowed to optimize the reading of the flag field, this answer explains it in more detail than I can.

Basically, the compiler knows about the semantics of pthread_cond_wait, pthread_mutex_lock, and pthread_mutex_unlock. It knows that it cannot cache memory reads across those calls (the call to pthread_cond_wait in this example). There is no notion of a memory barrier here, just special knowledge of certain functions and some rules to follow in their presence.

There is another issue beyond compiler optimization: reordering performed by the processor. Your average processor is capable of reordering memory accesses (reads/writes) provided the observable semantics are preserved, and it does so routinely because it improves performance. However, this breaks down when more than one processor can access the same memory address. A memory barrier is an instruction telling the processor that it cannot move reads/writes issued before the barrier past it and execute them afterwards; it has to finish them now.
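As an aside, C11 (standardized shortly after these answers were written) made these ordering rules explicit in the language itself via <stdatomic.h>. A sketch of the acquire/release pairing that corresponds to the barrier behavior described above, with invented names:

```c
#include <stdatomic.h>
#include <stdbool.h>

static int         payload;        // plain data being published
static atomic_bool ready = false;

// Writer: the release store forbids moving the payload write past it.
void publish(int v) {
    payload = v;
    atomic_store_explicit(&ready, true, memory_order_release);
}

// Reader: once the acquire load sees 'true', the earlier payload write
// is guaranteed to be visible as well.
bool try_consume(int* out) {
    if (!atomic_load_explicit(&ready, memory_order_acquire))
        return false;
    *out = payload;
    return true;
}
```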

answered by Sylvain Defresne


As written, the compiler is free to cache the result as you describe, or even in a more subtle way, by keeping it in a register. You can prevent this optimization by making the variable volatile. But that is not necessarily enough, and you should not code it this way anyway: you should use condition variables as prescribed (lock, wait, unlock).

Trying to work around the library is bad, and it gets worse. Perhaps reading Hans Boehm's paper on the general topic from PLDI 2005 ("Threads Cannot be Implemented as a Library"), or many of his follow-on articles (which led up to work on the revised C++ memory model), will put the fear of God in you and steer you back to the straight and narrow :).

answered by EmeryBerger