Usage: In production we have around 100 threads that can access the cache we are trying to implement. On a cache miss, the information is fetched from the database and the cache is updated by a writer thread.

To achieve this, we are planning to implement a multiple-reader, single-writer scheme.

We cannot update the compiler version; we are on g++-4.4.

Update: Each worker thread can both read and write. On a cache miss, the information is fetched from the DB and cached.

Problem Statement: We need to implement a cache to improve performance. Cache reads are far more frequent than writes to the cache.
I think we can use the boost::shared_mutex / boost::shared_lock, boost::upgrade_lock, and boost::upgrade_to_unique_lock implementation, along the lines of the sketch below.
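A minimal sketch of the read/upgrade pattern we have in mind (the map type and fetchFromDatabase are placeholders; Boost.Thread is assumed to be available):

```cpp
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>
#include <string>

std::string fetchFromDatabase(int key); // hypothetical DB call

std::map<int, std::string> g_cache;
boost::shared_mutex g_cacheMutex;

std::string lookup(int key)
{
    {
        // Many readers may hold the shared lock concurrently.
        boost::shared_lock<boost::shared_mutex> readLock(g_cacheMutex);
        std::map<int, std::string>::const_iterator it = g_cache.find(key);
        if (it != g_cache.end())
            return it->second;
    }

    // Cache miss: only one thread at a time may hold the upgrade lock,
    // but it still coexists with shared readers until it is upgraded.
    boost::upgrade_lock<boost::shared_mutex> upgradeLock(g_cacheMutex);
    std::map<int, std::string>::const_iterator it = g_cache.find(key);
    if (it != g_cache.end())
        return it->second; // another thread filled it in the meantime

    // Atomically upgrade to exclusive access and update the cache.
    boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(upgradeLock);
    std::string value = fetchFromDatabase(key);
    g_cache[key] = value;
    return value;
}
```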
But we have learnt that boost::shared_mutex has performance issues.
Questions

1. Does boost::shared_mutex impact performance when reads are much more frequent?
2. Are there other constructs available in g++ 4.4 with which reads are lock-free?
3. Also, we intend to use a Map to keep the information for the cache.
A minimal spin lock can be built on std::atomic:

```cpp
#include <atomic>

class spin_lock {
    static constexpr int UNLOCKED = 0;
    static constexpr int LOCKED   = 1;

    std::atomic<int> m_value{UNLOCKED};

public:
    void lock() {
        // Busy-wait until this thread wins the UNLOCKED -> LOCKED transition.
        while (true) {
            int expected = UNLOCKED;
            if (m_value.compare_exchange_strong(expected, LOCKED))
                break;
        }
    }

    void unlock() {
        m_value.store(UNLOCKED);
    }
};
```
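Since lock() and unlock() satisfy the BasicLockable requirements, the class can be used with std::lock_guard<spin_lock>. Note that std::atomic and constexpr are C++11 features that g++-4.4 does not fully support, so this sketch is not usable there as-is.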
A mutex is a lockable object that is designed to signal when critical sections of code need exclusive access, preventing other threads that use the same mutex from executing concurrently and accessing the same memory locations.
In C++, we create a mutex by constructing an instance of std::mutex, lock it with a call to the member function lock(), and unlock it with a call to the member function unlock().
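For example (a minimal sketch; the shared counter is illustrative):

```cpp
#include <mutex>

std::mutex m;
int shared_counter = 0; // illustrative shared state

void increment()
{
    m.lock();         // enter the critical section
    ++shared_counter; // exclusive access while the mutex is held
    m.unlock();       // leave the critical section
}
```

In real code you would rarely call lock()/unlock() by hand; an RAII wrapper such as std::lock_guard or std::unique_lock releases the mutex even if the critical section throws.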
A unique lock is an object that manages a mutex object with unique ownership in both states: locked and unlocked. On construction (or by move-assigning to it), the object acquires a mutex object, for whose locking and unlocking operations it becomes responsible.
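A minimal sketch of std::unique_lock (names are illustrative):

```cpp
#include <mutex>
#include <utility>

std::mutex m;

void example()
{
    std::unique_lock<std::mutex> lk(m); // locks the mutex on construction
    // ... critical section ...
    lk.unlock();                        // may release early
    // ... work that needs no lock ...
    lk.lock();                          // and re-acquire later
}   // if still locked, the destructor unlocks automatically

void transfer()
{
    std::unique_lock<std::mutex> a(m);
    std::unique_lock<std::mutex> b(std::move(a)); // b now owns the lock; a owns nothing
}
```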
If writes were non-existent, one possibility would be a 2-level cache, where you first have a thread-local cache, and then the normal cache with a mutex or a reader/writer lock.
If writes are extremely rare, you can do the same, but you need some lock-free way of invalidating the thread-local caches, e.g. an atomic int incremented with every write; when a thread sees it change, it clears its thread-local cache. A sketch of this idea follows.
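A minimal sketch of the generation-counter idea (it uses C++11 thread_local and <atomic>, which g++-4.4 does not provide, so treat it purely as an illustration of the structure; all names are placeholders):

```cpp
#include <atomic>
#include <map>
#include <mutex>
#include <string>
#include <utility>

std::string fetchFromDatabase(int key); // hypothetical DB call

std::map<int, std::string> g_cache;    // level-2 shared cache
std::mutex g_cacheMutex;               // protects g_cache
std::atomic<unsigned> g_generation(0); // bumped on every write

std::string lookup(int key)
{
    // Level 1: thread-local cache, accessed without any locking.
    static thread_local std::map<int, std::string> t_cache;
    static thread_local unsigned t_generation = 0;

    // Lock-free invalidation check: if any writer bumped the global
    // generation, discard the thread-local copy.
    unsigned g = g_generation.load(std::memory_order_acquire);
    if (g != t_generation) {
        t_cache.clear();
        t_generation = g;
    }

    std::map<int, std::string>::const_iterator it = t_cache.find(key);
    if (it != t_cache.end())
        return it->second;

    // Level 2: the shared cache under a mutex (a reader/writer lock
    // would be used the same way).
    std::lock_guard<std::mutex> lock(g_cacheMutex);
    std::map<int, std::string>::iterator sit = g_cache.find(key);
    if (sit == g_cache.end()) {
        sit = g_cache.insert(std::make_pair(key, fetchFromDatabase(key))).first;
        g_generation.fetch_add(1, std::memory_order_release); // a write: invalidate
    }
    t_cache[key] = sit->second; // refill level 1
    return sit->second;
}
```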
You need to profile it.
In case you're stuck because you don't have a "similar enough" environment where you can actually test things, you can probably write a simple wrapper around the pthreads primitive pthread_rwlock_t.
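For instance, a minimal C++03-compatible wrapper (error handling omitted for brevity):

```cpp
#include <pthread.h>

// Reader/writer lock built directly on pthread_rwlock_t.
class rw_lock {
    pthread_rwlock_t m_lock;

    // non-copyable
    rw_lock(const rw_lock&);
    rw_lock& operator=(const rw_lock&);

public:
    rw_lock()  { pthread_rwlock_init(&m_lock, NULL); }
    ~rw_lock() { pthread_rwlock_destroy(&m_lock); }

    void lock_shared()    { pthread_rwlock_rdlock(&m_lock); }
    void lock_exclusive() { pthread_rwlock_wrlock(&m_lock); }
    void unlock()         { pthread_rwlock_unlock(&m_lock); }
};

// Scoped read lock in the spirit of boost::shared_lock.
class scoped_read_lock {
    rw_lock& m_rw;
public:
    explicit scoped_read_lock(rw_lock& rw) : m_rw(rw) { m_rw.lock_shared(); }
    ~scoped_read_lock() { m_rw.unlock(); }
};
```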
Of course you can design things to be lock-free. The most obvious solution would be to not share state. (If you do share state, you'll have to check that your target platform supports atomic instructions.) However, without any knowledge of your application domain, I feel very safe suggesting you do not want lock-free. See e.g. Do lock-free algorithms really perform better than their lock-full counterparts?