
shared_mutex lock ordering

I was under the impression that a multiple-reader / single-writer pattern implemented with C++17's std::shared_mutex could potentially never grant a unique lock if too many shared locks were acquired.

After digging on cppreference, I am unsure that is the case. Specifically:

All lock and unlock operations on a single mutex occur in a single total order

For example, given the following operations on a shared_mutex, I believed the unique_lock might never be acquired, assuming an infinite stream of shared_locks, each acquired before the earlier shared_locks release.

shared_lock
shared_lock
shared_lock

unique_lock

shared_lock
[...]
shared_lock

This would give the following characteristics.

{ shared_lock, shared_lock, shared_lock, shared_lock, ..., shared_lock } // never releases

unique_lock

However, if I understand cppreference correctly, once the unique_lock tries to acquire, subsequent shared_locks would block until the unique_lock is released, giving the following threading characteristics.

{ shared_lock, shared_lock, shared_lock } // simultaneous

unique_lock

{ shared_lock, ..., shared_lock } // waits, then simultaneous

So my question is: does a std::shared_mutex keep ordering between shared and unique locks, preventing the case where a unique_lock is never acquired because of an overwhelming number of shared_locks?

Edit:

Here is a code example to help understand the problem, and for posterity's sake. On MSVC 2019, shared_mutex is safe and ordering happens as desired: the unique_lock does get processed before the "infinite" number of shared_locks.

The question now becomes: is this behavior platform-dependent?

#include <chrono>
#include <cstdio>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

using namespace std::chrono_literals;

std::shared_mutex smtx;

int main(int, char**) {

    std::vector<std::thread> threads;

    auto read_task = [&]() {
        std::shared_lock l{ smtx };
        printf("read\n");
        std::this_thread::sleep_for(1s);
    };

    auto write_task = [&]() {
        std::unique_lock l{ smtx };
        printf("write\n");
        std::this_thread::sleep_for(1s);
    };

    // Create a few reader tasks.
    threads.emplace_back(read_task);
    threads.emplace_back(read_task);
    threads.emplace_back(read_task);


    // Try to lock a unique_lock before read tasks are done.
    std::this_thread::sleep_for(1ms);
    threads.emplace_back(write_task);

    // Then, enqueue a gazillion read tasks.
    // Will the unique_lock be acquired? [drum roll]

    // Would be while(true) in the pathological case; 120 iterations are enough for the demo.
    for (size_t i = 0; i < 120; ++i) {
        std::this_thread::sleep_for(1ms);
        threads.emplace_back(read_task);
    }

    for (auto& t : threads) {
        t.join();
    }
}
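
Note: this example relies on C++17 class template argument deduction for std::shared_lock l{ smtx }; and std::unique_lock l{ smtx };, so it must be compiled as C++17 or later (e.g. /std:c++17 on MSVC, -std=c++17 on GCC and Clang).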

Output:

read
read
read
write
read
...
read
asked May 11 '19 by scx


1 Answer

The std::shared_mutex specification does not specify a priority for either shared or unique locks, nor is there any API to set such a priority. One of the original motivations for this lack of a priority specification is the existence of the Alexander Terekhov algorithm, as explained here.

A secondary motivation is to explain the lack of reader-writer priority policies in shared_mutex. This is due to an algorithm credited to Alexander Terekhov which lets the OS decide which thread is the next to get the lock without caring whether a unique lock or shared lock is being sought. This results in a complete lack of reader or writer starvation. It is simply fair.

The standard spec does not require the Alexander Terekhov algorithm. However, it was at least my hope that this algorithm would be preferred because of the lack of specification or API for preferring readers over writers or vice versa.

There are more details about the Alexander Terekhov algorithm and some code which demonstrates its behavior in this SO answer here.
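
Since there is no standard API for requesting a priority, an application that truly needed writer priority would have to build it by hand. Below is a minimal illustrative sketch (the class name and layout are hypothetical, my own invention, not part of any standard) of a write-preferring lock built from std::mutex and std::condition_variable. Note that it deliberately gives up the fairness described above: writers can no longer starve, but readers now can.

#include <condition_variable>
#include <mutex>

// Hypothetical write-preferring shared mutex (illustration only, not standard):
// new readers are held back whenever a writer holds or is waiting for the
// lock, so writers cannot be starved. Readers, however, can be.
class write_preferring_shared_mutex {
public:
    void lock_shared() {
        std::unique_lock<std::mutex> l{ m_ };
        // Block new readers while a writer holds or awaits the lock.
        cv_readers_.wait(l, [&] { return !writer_active_ && writers_waiting_ == 0; });
        ++active_readers_;
    }

    void unlock_shared() {
        std::unique_lock<std::mutex> l{ m_ };
        if (--active_readers_ == 0) {
            cv_writers_.notify_one(); // last reader out wakes a waiting writer
        }
    }

    void lock() {
        std::unique_lock<std::mutex> l{ m_ };
        ++writers_waiting_;
        cv_writers_.wait(l, [&] { return active_readers_ == 0 && !writer_active_; });
        --writers_waiting_;
        writer_active_ = true;
    }

    void unlock() {
        std::unique_lock<std::mutex> l{ m_ };
        writer_active_ = false;
        if (writers_waiting_ > 0) {
            cv_writers_.notify_one(); // hand off to the next writer first
        } else {
            cv_readers_.notify_all(); // no writers pending, release all readers
        }
    }

private:
    std::mutex m_;
    std::condition_variable cv_readers_;
    std::condition_variable cv_writers_;
    int active_readers_ = 0;
    int writers_waiting_ = 0;
    bool writer_active_ = false;
};

Because it provides lock/unlock/lock_shared/unlock_shared, it can be used with std::unique_lock and std::shared_lock unchanged, e.g. in the question's example above.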

answered Nov 09 '22 by Howard Hinnant