Amazon ElastiCache Failover

We have been using AWS ElastiCache for about six months now without any issues. Every night a Java app of ours runs that flushes DB 0 of our Redis cache and then repopulates it with updated data. However, we had three instances between July 31 and August 5 where our DB was successfully flushed but we were then unable to write the new data to the database.
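
For context, the nightly job does roughly the following (a minimal Jedis sketch; the endpoint, class name, and key handling are placeholders, not our real configuration):

    import redis.clients.jedis.Jedis;

    public class NightlyRefresh {
        public static void main(String[] args) {
            // Placeholder host; in practice this is the ElastiCache primary endpoint.
            try (Jedis jedis = new Jedis("prod-redis.example.cache.amazonaws.com", 6379)) {
                jedis.select(0);   // we only use DB 0
                jedis.flushDB();   // clear the old data
                // ... then repopulate, e.g. jedis.set(key, value) for each record
            }
        }
    }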

We were getting the following exception in our application:

redis.clients.jedis.exceptions.JedisDataException: redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only slave.

When we look at the cache events in ElastiCache, we see:

Failover from master node prod-redis-001 to replica node prod-redis-002 completed

We have not been able to diagnose the issue, and since the app had been running fine for the past six months, I am wondering whether it is related to the recent ElastiCache release from June 30: https://aws.amazon.com/releasenotes/Amazon-ElastiCache

We have always written to our master node, and we have only one replica node.

If someone could offer any insight it would be much appreciated.

EDIT: This seems to be an intermittent problem. Some days it fails; other days it runs fine.

asked Aug 05 '15 by DarrenCibis

1 Answer

We have been in contact with AWS support for the past few weeks and this is what we have found.

Most Redis commands are synchronous, including the flush, so it blocks all other requests. In our case we are actually flushing 19 million keys, which takes more than 30 seconds.
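
You can see the blocking window directly by timing the flush (a minimal sketch; jedis is an open connection as in the question):

    long start = System.currentTimeMillis();
    jedis.flushDB(); // synchronous: the server processes nothing else until this returns
    System.out.println("FLUSHDB took " + (System.currentTimeMillis() - start) + " ms");

While FLUSHDB runs, the node cannot answer any other command, including whatever probe the health check sends.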

ElastiCache performs a periodic health check, and while the flush is running the health check is blocked, which triggers a failover.

We have been asking the support team how often the health check is performed, so we can understand why our flush causes a failover only 3-4 times a week. The best answer we could get was "We think it's every 30 seconds." However, our flush consistently takes more than 30 seconds yet does not consistently cause a failover.

They said they may implement the ability to configure the timing of the health check, but that this would not be done anytime soon.

The best advice they could give us was:

1) Create a completely new cluster to load the new data into, and instead of flushing the previous cluster, re-point your application(s) to the new cluster and remove the old one.

2) If the data you are flushing is an updated version of the existing data, consider not flushing at all, and instead updating and overwriting the keys in place (the sketch after this list does this as a side effect).

3) Instead of flushing the data, set the expiry of the items to when you would normally flush, and let the keys be reclaimed (possibly with a random offset to avoid thundering-herd issues), then reload the data (sketched below).
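
A minimal sketch of option 3 with Jedis; the TTL window, jitter range, and class name are illustrative assumptions, not anything AWS prescribed. Since SETEX overwrites existing values, this also covers option 2's update-in-place idea:

    import java.util.Map;
    import java.util.concurrent.ThreadLocalRandom;
    import redis.clients.jedis.Jedis;

    public class ExpiringLoad {
        // Write each key with a TTL around the next scheduled load, plus random
        // jitter, so keys expire gradually instead of via one blocking FLUSHDB.
        static void load(Jedis jedis, Map<String, String> data) {
            int baseTtl = 24 * 60 * 60; // seconds until the next nightly load (assumption)
            for (Map.Entry<String, String> e : data.entrySet()) {
                int jitter = ThreadLocalRandom.current().nextInt(15 * 60); // spread expiry over 15 min
                jedis.setex(e.getKey(), baseTtl + jitter, e.getValue());   // overwrite value + set expiry
            }
        }
    }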

Hope this helps :)

answered by DarrenCibis