I have followed the solution provided in this Stack Overflow link and it works perfectly when I use it from my browser. However, when I tried hitting that URL with curl, it didn't create a cache entry the browser could use.
Let me explain.
If I hit a URL like
example.org/results?limit=7
from Chrome, it takes 8-10 seconds to load, and successive hits take milliseconds.
So I called that same URL with the curl command, but it didn't use the cached data and created the cache again.
So I found out the issue is with the arg parameter in the code below: it contains the WSGIRequest object, whose browser headers end up in the cache key, and I don't want headers in the key. This defeats my purpose of having curl requests from a Celery task create the cache automatically.
@method_decorator(cache_page(60 * 60 * 24))
def dispatch(self, *arg, **kwargs):
    print(arg)
    print(kwargs)
    return super(ProfileLikeHistoryApi, self).dispatch(*arg, **kwargs)
Is there a way to pass only the kwargs when creating the cache key, or any other alternative by which I can cache by URL only, not by headers?
Thanks for the help in advance.
TL;DR: remove the method decorator and cache manually.
from django.core.cache import cache
from django.utils.encoding import force_bytes, iri_to_uri
import hashlib

def dispatch(self, *args, **kwargs):
    if self.request.method in ('GET', 'HEAD'):
        # Build the key from the URL alone, so browser and curl hits share it.
        url = iri_to_uri(self.request.build_absolute_uri())
        key = hashlib.md5(force_bytes(url)).hexdigest()
        data = cache.get(key)
        if data is None:
            data = super(ProfileLikeHistoryApi, self).dispatch(*args, **kwargs)
            cache.set(key, data, 60 * 60 * 24)
        return data
    return super(ProfileLikeHistoryApi, self).dispatch(*args, **kwargs)
Yes, you are right that the cache_page decorator decides what to cache based on the headers; however, only the headers named in the response's Vary header are supposed to have an impact.
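If you want to see this for yourself, Django exposes the helper that cache_page uses to rebuild the page key; a minimal sketch (get_cache_key returns None until the page has been cached at least once):

# django.utils.cache.get_cache_key reconstructs the page cache key from
# the request URL plus the values of whatever headers the response's
# Vary header listed, so a curl hit and a Chrome hit can yield
# different keys for the same URL.
from django.utils.cache import get_cache_key

def debug_cache_key(request):
    key = get_cache_key(request)  # None if this page isn't cached yet
    print(key)
    return key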
Secondly, only GET and HEAD requests are cached (and cacheable), which is why the code above checks the method first.
You might have heard that MD5 is obsolete and not secure. That may be so for cryptography, but it does not apply in our case. The hash generation scheme used here is exactly the same as the one in Django's _generate_cache_key, except that we leave the headers out of the equation.
Every day there will be one user who gets a slow page because the cache has expired; everyone else will get stale data, data that can be as old as 23 hours and 59 minutes.
Consider running a background process or a cron job that runs this task in the background, say every 6 hours, and refreshes the cache.
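A minimal sketch of such a task, assuming Celery is already configured for the project; the task name and URL list are hypothetical, and the URLs must match what build_absolute_uri() returns on the server so the key derivation targets the same entries as dispatch above:

import hashlib

import requests
from celery import shared_task
from django.core.cache import cache
from django.utils.encoding import force_bytes, iri_to_uri

URLS_TO_REFRESH = [  # hypothetical list of endpoints to keep warm
    'https://example.org/results?limit=7',
]

@shared_task
def refresh_results_cache():
    for url in URLS_TO_REFRESH:
        # Same key derivation as in dispatch, so this targets the same entry.
        key = hashlib.md5(force_bytes(iri_to_uri(url))).hexdigest()
        cache.delete(key)              # drop the old entry...
        requests.get(url, timeout=30)  # ...and regenerate it right away

Schedule it with celery beat (or cron) every 6 hours and visitors will always hit a warm cache.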
Now, that may be a bit difficult with memcached, because it doesn't provide an easy way to find all keys matching a specific pattern, but if you use Redis or cache in the database, it becomes easy.
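With the django-redis backend, for example, pattern invalidation is one call; a minimal sketch, assuming the dispatch above stored its keys under a hypothetical 'results:' prefix (delete_pattern is a django-redis extension, not part of Django's core cache API):

from django.core.cache import cache

def invalidate_results_cache():
    # Deletes every key stored with the hypothetical 'results:' prefix,
    # i.e. key = 'results:' + md5(...).hexdigest() in dispatch.
    cache.delete_pattern('results:*')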