I have the result of a very expensive query: it is the join of several tables plus a map-reduce job. The result is cached in memcached for 15 minutes. Once the cache expires, the query obviously has to run again and the cache is re-warmed. But at the point of expiration the thundering herd problem can occur.
One way to mitigate this, which I do right now, is to run a scheduled task that kicks in at the 14th minute and re-warms the cache. Somehow this feels suboptimal to me.
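For context, the scheduled-task approach could look something like the sketch below. The cache client, key, TTLs and `run_expensive_query` are all illustrative stand-ins (a dict-backed fake replaces memcached so the snippet is self-contained); the idea is just to re-set the key before the hard expiry ever hits.

```python
import time

CACHE_TTL = 15 * 60      # hard expiry: 15 minutes
REFRESH_MARGIN = 60      # re-warm one minute early (the "14th minute")


class FakeCache:
    """Stand-in for a memcached client; stores (value, expires_at)."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl):
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        return value if time.time() < expires_at else None


cache = FakeCache()


def run_expensive_query():
    # placeholder for the join + map-reduce job
    return {"rows": 42}


def warm_cache():
    """Scheduled task: recompute and overwrite the cache entry.

    Run this every CACHE_TTL - REFRESH_MARGIN seconds (e.g. via cron or
    Celery beat) so readers never observe a cold cache.
    """
    cache.set("report", run_expensive_query(), CACHE_TTL)
```

The weakness, as noted, is that the schedule and the TTL have to be kept in sync by hand, and a slow or failed task still leaves a window where the herd can stampede.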
Another approach I like is nginx's `proxy_cache_use_stale updating;` mechanism: the web server keeps serving the stale cache entry while a single request, triggered the moment expiration happens, updates the cache in the background.
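A client-side equivalent of that stale-while-revalidate behaviour can be sketched on top of plain memcached-style `get`/`set`/`add`: store a "soft" expiry timestamp inside the value while giving memcached a longer hard TTL, and use an atomic `add` on a lock key so only one worker refreshes while everyone else keeps reading the stale value. All names here (`SOFT_TTL`, `load_report`, the dict-backed `FakeCache`) are illustrative assumptions, not a real library API.

```python
import threading
import time

SOFT_TTL = 15 * 60   # after this the value is considered stale
HARD_TTL = 20 * 60   # memcached evicts only after this
LOCK_TTL = 30        # only one worker refreshes at a time


class FakeCache:
    """Minimal memcached-like client: get/set plus atomic add."""

    def __init__(self):
        self._store = {}
        self._mutex = threading.Lock()

    def set(self, key, value, ttl):
        with self._mutex:
            self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        with self._mutex:
            item = self._store.get(key)
        if item is None or time.time() >= item[1]:
            return None
        return item[0]

    def add(self, key, value, ttl):
        """Set only if the key is absent -- this acts as the refresh lock."""
        with self._mutex:
            item = self._store.get(key)
            if item is not None and time.time() < item[1]:
                return False
            self._store[key] = (value, time.time() + ttl)
            return True


cache = FakeCache()


def load_report():
    return {"rows": 42}   # stand-in for the expensive query


def get_report():
    entry = cache.get("report")
    if entry is None:
        # cold cache: only the very first request pays the full cost
        value = load_report()
        cache.set("report", (value, time.time() + SOFT_TTL), HARD_TTL)
        return value
    value, soft_expires = entry
    if time.time() >= soft_expires and cache.add("report:lock", 1, LOCK_TTL):
        # stale and we won the lock: refresh in the background while
        # other clients keep getting the (still usable) stale value
        def refresh():
            cache.set("report", (load_report(), time.time() + SOFT_TTL), HARD_TTL)
        threading.Thread(target=refresh, daemon=True).start()
    return value
```

With a real memcached client the same pattern applies, since `add` is atomic there too; the background refresh could equally be handed off to a task queue instead of a thread.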
Has anyone applied something like this to a memcached scenario? I understand this would have to be a client-side strategy.
If it helps, I use Django.
You might have a look at https://github.com/ericflo/django-newcache