I'd like to write a function that is accessible only by a single thread at a time. I don't need busy waits; a brutal 'rejection' is enough if another thread is already running it. This is what I have come up with so far:
std::atomic<bool> busy(false);

bool func()
{
    if (busy.exchange(true) == true)
        return false;
    // ... do stuff ...
    busy.exchange(false);
    return true;
}
Should I use std::memory_order_acq_rel for the exchanges? As far as I understand, a relaxed ordering (std::memory_order_relaxed) wouldn't be enough to prevent reordering.

Your atomic swap implementation might work. But trying to do thread-safe programming without a lock is almost always fraught with issues and is often harder to maintain.
Unless you need a specific performance improvement, std::mutex with the try_lock() method is all you need, e.g.:
std::mutex mtx;

bool func()
{
    // making use of std::unique_lock so if the code throws an
    // exception, the std::mutex will still get unlocked correctly...
    std::unique_lock<std::mutex> lck(mtx, std::try_to_lock);
    bool gotLock = lck.owns_lock();
    if (gotLock)
    {
        // do stuff
    }
    return gotLock;
}
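For illustration, here is a small test driver (just a sketch; the thread count is arbitrary, and it assumes the mtx / func() definitions above are in scope) that calls func() from several threads at once. A thread that wins the try_lock does the work; the others return false immediately. Which threads get rejected depends on timing.

#include <thread>
#include <vector>
#include <cstdio>

// assumes mtx and func() from the snippet above are defined in this translation unit

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([i] {
            // each thread tries once; no waiting, just success or rejection
            std::printf("thread %d: %s\n", i, func() ? "ran" : "rejected");
        });
    for (auto& t : threads)
        t.join();
}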
Your code looks correct to me, as long as you leave the critical section by falling out the bottom, not by returning early or throwing an exception.
You can unlock with a release store; an RMW (like exchange) is unnecessary. The initial exchange only needs acquire (but it does need to be an atomic RMW like exchange or compare_exchange_strong).
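Concretely, that would look something like the following sketch of your function with the minimal orderings (the variable name follows your snippet):

#include <atomic>

std::atomic<bool> busy(false);

bool func()
{
    // acquire RMW: try to take the lock; fail fast if another thread holds it
    if (busy.exchange(true, std::memory_order_acquire))
        return false;

    // ... do stuff (the critical section) ...

    // a plain release store is enough to unlock; no RMW needed
    busy.store(false, std::memory_order_release);
    return true;
}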
Note that ISO C++ says that taking a std::mutex is an "acquire" operation, and releasing it is a "release" operation, because that's the minimum necessary for keeping the critical section contained between the taking and the releasing.
Your algo is exactly like a spinlock, but without retry if the lock's already taken (i.e. just a try_lock). All the reasoning about the memory order necessary for locking applies here, too. What you've implemented is logically equivalent to the try_lock / unlock in @selbie's answer, and very likely performance-equivalent, too. If you never use mtx.lock() or whatever, you're never actually blocking, i.e. waiting for another thread to do something, so your code is still potentially lock-free in the progress-guarantee sense.
Rolling your own with an atomic<bool> is probably good; using std::mutex here gains you nothing. You only want it to do a try-lock and an unlock, and that's certainly possible (with some extra function-call overhead), but some implementations might do something more, and you're not using any of the functionality beyond that. The one nice thing std::mutex gives you is the comfort of knowing that it safely and correctly implements try_lock and unlock. But if you understand locking and acquire/release, it's easy to get that right yourself.
The usual performance reason not to roll your own locking is that mutex will be tuned for the OS and typical hardware, with stuff like exponential backoff, x86 pause instructions while spinning a few times, then fallback to a system call, and efficient wakeup via system calls like Linux futex. All of this is only beneficial to the blocking behaviour; .try_lock leaves all of that unused, and if you never have any thread sleeping then unlock never has any other threads to notify.
There is one advantage to using std::mutex: you can use RAII without having to roll your own wrapper class. std::unique_lock with the std::try_to_lock policy will do this. It makes your function exception-safe, making sure it always unlocks before exiting, if it got the lock.
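That said, the wrapper class isn't much code either. Here is a minimal sketch (the class name is mine, purely illustrative) of an RAII try-lock guard over the atomic<bool> flag:

#include <atomic>

// Minimal RAII try-lock guard over an atomic<bool> flag (illustrative name).
class AtomicTryLock {
    std::atomic<bool>& flag_;
    bool locked_;
public:
    explicit AtomicTryLock(std::atomic<bool>& flag)
        : flag_(flag),
          // acquire RMW: we own the lock only if the flag was previously false
          locked_(!flag.exchange(true, std::memory_order_acquire)) {}
    ~AtomicTryLock() {
        if (locked_)
            flag_.store(false, std::memory_order_release);  // unlock on scope exit
    }
    AtomicTryLock(const AtomicTryLock&) = delete;
    AtomicTryLock& operator=(const AtomicTryLock&) = delete;
    bool owns_lock() const { return locked_; }
};

std::atomic<bool> busy(false);

bool func()
{
    AtomicTryLock lck(busy);
    if (!lck.owns_lock())
        return false;
    // ... do stuff; the destructor unlocks even if this throws ...
    return true;
}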