I have a model TicketType which has about 500 instances. It changes only a few times per week, but when it changes I need to invalidate all cached values which used the old TicketTypes.
Unfortunately some cache keys are not fixed. They contain computed data.
I see these solutions:
Use the version argument and update the version value in a post_save signal handler of TicketType (a sketch follows below).
Use a common prefix for all cache keys which are based on TicketType, then invalidate all matching cache keys in a post_save signal handler.
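For illustration, here is a minimal sketch of the version-argument option, assuming Django's cache framework; the key name, the handler name and the myapp import path are placeholders, not actual code:

from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import TicketType  # placeholder import path

VERSION_KEY = 'ticket-type-version'

def current_version():
    # Every TicketType-related cache read/write passes this version.
    return cache.get_or_set(VERSION_KEY, 1)

@receiver(post_save, sender=TicketType)
def bump_ticket_type_version(sender, **kwargs):
    # Incrementing the version turns every key stored under the old
    # version into a cache miss; stale entries simply age out.
    try:
        cache.incr(VERSION_KEY)
    except ValueError:
        cache.set(VERSION_KEY, 1)

# Usage:
#   cache.set(hash_key, tree, version=current_version())
#   tree = cache.get(hash_key, version=current_version())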
I guess there is a third, and better way ...
Example:
TicketType is a tree. Visibility of TicketTypes is bound to permissions. Two users might see the tree in a different way if they have different permissions. We cache it according to the permissions. The permissions of a user get serialized and hashed. The cache key is created from a string which contains the hash and a fixed part:
hash_key = 'ticket-type-tree--%s' % hashed_permissions
If the TicketType tree changes, we need to be sure that no old data gets loaded from the cache. Actively invalidating is not needed, as long as no old data gets used.
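For context, one plausible way hashed_permissions could be computed; the sorted-permission serialization below is an assumption, not the actual code:

import hashlib
import json

def permissions_hash(user):
    # Serialize the user's permissions deterministically, then hash them.
    perms = sorted(user.get_all_permissions())
    return hashlib.sha1(json.dumps(perms).encode('utf-8')).hexdigest()

hash_key = 'ticket-type-tree--%s' % permissions_hash(user)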
Cache invalidation describes the process of actively invalidating stale cache entries when data in the source of truth mutates. If a cache invalidation gets mishandled, it can indefinitely leave inconsistent values in the cache that are different from what's in the source of truth.
You can use the ticket modification time as part of your cache key.
hash_key = 'ticket-type-tree--%s-%s' % (hashed_permissions, tree.lastmodified)
You can add a DateTimeField with auto_now=True. If getting the modification time from the db is too expensive, you may cache that as well.
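A minimal sketch of that idea; the field name lastmodified, the helper ticket_type_tree_key and the 'ticket-type-lastmodified' cache key are assumptions:

from django.core.cache import cache
from django.db import models

class TicketType(models.Model):
    lastmodified = models.DateTimeField(auto_now=True)

def ticket_type_tree_key(hashed_permissions):
    # Keep the newest modification time in the cache so building a key
    # does not need a database query on every request.
    lastmodified = cache.get_or_set(
        'ticket-type-lastmodified',
        lambda: TicketType.objects.latest('lastmodified').lastmodified,
        None,  # no timeout; refreshed by the post_save handler below
    )
    return 'ticket-type-tree--%s-%s' % (hashed_permissions, lastmodified)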
Usually, updating the cache in a post_save signal handler is fine, unless you want to have consistent data at all times and are willing to pay the extra cost of transactions.
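Continuing the sketch above, such a post_save handler could refresh the cached modification time; the key name is the same assumption as before:

from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=TicketType)
def refresh_ticket_type_lastmodified(sender, instance, **kwargs):
    # New tree keys are built from the fresh timestamp, so old cache
    # entries are never read again and can simply expire.
    cache.set('ticket-type-lastmodified', instance.lastmodified, None)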
Use redis to cache your models
The way I would cache my instances would be the following:
1-Make sure you are getting one item at a time, e.g. Model.objects.get(foo='bar'), and that you're using the attribute foo every time to fetch the model from the database. That attribute will be used to invalidate the cached data later.
2-Override the save() method and make sure it writes the serialized data to the cache, keyed by the foo attribute.
E.g:
import json
import redis
from django.db import models

redis_client = redis.StrictRedis()

class Model(models.Model):
    foo = models.CharField(max_length=100)
    bar = models.CharField(max_length=100)

    def serialize_model(self):
        return json.dumps({'foo': self.foo, 'bar': self.bar})

    def save(self, *args, **kwargs):
        redis_client.set(self.foo, self.serialize_model())  # cache under foo
        super(Model, self).save(*args, **kwargs)
3-Override the manager's get() method to return the cached serialized object before hitting the database.
E.g:
class ModelManager(models.Manager):
    def get(self, *args, **kwargs):
        # Check redis before hitting the database when looking up by foo.
        if 'foo' in kwargs:
            cached = redis_client.get(kwargs['foo'])
            if cached is not None:
                return self.model(**json.loads(cached))  # rebuild from cache
        return super(ModelManager, self).get(*args, **kwargs)

class Model(models.Model):
    ...
    objects = ModelManager()
4-Override the delete() method to remove the cached entry when the instance is deleted.
E.g:
class Model(models.Model):
    ...

    def delete(self, *args, **kwargs):
        # Drop the cached copy before deleting the row itself.
        redis_client.delete(self.foo)
        super(Model, self).delete(*args, **kwargs)
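For illustration, hypothetical usage with the placeholder Model above (not part of the original answer):

obj = Model(foo='spam', bar='eggs')
obj.save()                       # writes the row and caches the serialized copy
Model.objects.get(foo='spam')    # now served from redis instead of the database
obj.delete()                     # removes the row and the cached copy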
Replace the Model class with your model; in this case it would be TicketType.
One thing: I'm assuming you will not touch the database outside your Django app. If you're using raw SQL anywhere else, this will not work.
Look up the redis functions on their website; they have functions to set, get, and delete. If you're using another caching backend, look for its equivalent set, get, and delete operations.