I am building an architecture on AWS with several EC2 instances as webservers and a central MySQL database (RDS). The EC2 instances have Redis installed for caching single db rows. When a row changes in MySQL, I want every instance to update the corresponding cache entries too.
What is the best way to do this in the AWS environment?
Don't use database triggers for this. Wait until the transaction has actually committed (rather than rolled back), and then invalidate the cache from within the application layer.
If you don't, you can hit a race where the entry is deleted from the cache by your SQL trigger before the commit, and concurrent requests (which can't yet see the new data) re-fill the cache with the old values.
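A minimal sketch of the commit-then-invalidate ordering, using plain dicts as hypothetical stand-ins for the MySQL row and the Redis cache (the function and key names are illustrative, not part of any real API):

```python
# Sketch: invalidate the cache only AFTER the database change is committed.
# `db` and `cache` are dict stand-ins for an RDS row and a Redis instance.

def update_user_email(db, cache, user_id, new_email):
    # 1. Write to the database and commit first
    #    (stands in for UPDATE users ...; COMMIT).
    db[user_id] = new_email
    # 2. Only then delete the cache entry, so a concurrent reader that
    #    re-fills the key afterwards can only see the committed value
    #    (stands in for a Redis DEL).
    cache.pop(f"user:{user_id}", None)

db = {1: "old@example.com"}
cache = {"user:1": "old@example.com"}
update_user_email(db, cache, 1, "new@example.com")
print("user:1" in cache)  # the stale entry is gone
```

Doing the delete inside a trigger would reverse steps 1 and 2, which is exactly the race described above.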
If you are using a queue service (Amazon SQS, Redis pub/sub, etc.), you can publish a message for each record you want expired. A worker listens on the queue; when it receives a message telling it which record to invalidate, it connects to the cache and expires that entry.
This works whether you have one cache server or many: run one worker per cache server, or a single worker that connects to all of them. Multiple workers scale better.