I'm trying to reduce the amount of locking my code needs to do, and came across a bit of an academic question on how pthread_mutex_lock treats its memory barriers. To make this easy to understand, let's say the mutex is protecting a data-field that is totally static once initialized, but I want to defer this setup until the first access. The code I want to write looks like:
    /* assume the code safely sets data to null at setup,
     * and the mutex is correctly setup
     */
    if (NULL == data) {
        pthread_mutex_lock(&lock);
        /* Need to re-check data in case it was already setup */
        if (NULL == data)
            data = deferred_setup_fcn();
        pthread_mutex_unlock(&lock);
    }
The possible issue I see is that data is set up inside the lock but read outside it. Is it possible for the compiler to cache the value of data across the pthread_mutex_lock call? Or do I have to insert the appropriate volatile keywords to prevent that?
I know that it'd be possible to do this with a pthread_once call, but I wanted to avoid using another data-field (the lock was already there protecting related fields).
A pointer to a definitive guide on POSIX threads function call memory orderings would work great too.
The problem with this pattern is that memory barriers are between two threads, but a reader in your example may execute no instructions that imply a barrier. Thus there is no guarantee that memory writes performed by deferred_setup_fcn() are visible even if the write to data is visible (from the point of view of a reader that races with a writer). That is, the reader could see data != NULL, but when it actually tries to access the values pointed to by data, find a half-initialised or uninitialised structure.