I am running 6 Redis nodes: 3 masters and 3 slaves, with every master having 1 slave.
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.17.0.5:6382 to 172.17.0.2:6379
Adding replica 172.17.0.6:6383 to 172.17.0.3:6380
Adding replica 172.17.0.7:6384 to 172.17.0.4:6381
The cluster is running and I can SET and GET keys.
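(For reference, a 3-master / 3-slave cluster like this is typically created with redis-cli's built-in cluster support; the command below is only a sketch, assuming Redis 5+ and the same addresses as in the output above:)

redis-cli --cluster create 172.17.0.2:6379 172.17.0.3:6380 172.17.0.4:6381 172.17.0.5:6382 172.17.0.6:6383 172.17.0.7:6384 --cluster-replicas 1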
I shut down master1 (172.17.0.2:6379); slave1 (172.17.0.5:6382) became a master and the cluster was still running.
Then I shut down slave1 (172.17.0.5:6382). When I tried to SET keys, I got this error:
(error) CLUSTERDOWN The cluster is down
I expected that when I shut down both master1 and slave1 the cluster would still be running and accept Redis operations, but the opposite happened.
What is the reason behind this?
Is it possible to solve this problem without starting master1 or slave1 again?
With Cluster Mode enabled, your Redis cluster can now scale horizontally (in or out) in addition to scaling vertically (up and down).
Redis (cluster mode enabled) supports partitioning your data across up to 500 node groups. You can dynamically change the number of shards as your business needs change. One advantage of partitioning is that you spread your load over a greater number of endpoints, which reduces access bottlenecks during peak demand.
Open the Command Prompt, change to the Redis directory, and run c:\Redis>redis-cli -h Redis_Cluster_Endpoint -p 6379. You are now connected to the cluster and can run Redis commands like the following.
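For example, a quick check once connected might look like this (the key name and value are just placeholders):

SET mykey "hello"
GET mykey
CLUSTER INFO

CLUSTER INFO reports cluster_state and how many slots are assigned, which is useful when diagnosing a CLUSTERDOWN error like the one above.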
Because some slots are served by master1 and slave1, if both of them are down those slots are no longer covered by any node in the cluster. When this happens, the cluster is down by default. You can modify this behavior by changing the cluster-require-full-coverage option.
Quote from redis.conf:
By default Redis Cluster nodes stop accepting queries if they detect there is at least an hash slot uncovered (no available node is serving it). This way if the cluster is partially down (for example a range of hash slots are no longer covered) all the cluster becomes, eventually, unavailable. It automatically returns available as soon as all the slots are covered again.
However sometimes you want the subset of the cluster which is working, to continue to accept queries for the part of the key space that is still covered. In order to do so, just set the cluster-require-full-coverage option to no.
cluster-require-full-coverage yes
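If you want the surviving nodes to keep serving the slots they still cover, you can flip this option in redis.conf or at runtime with CONFIG SET on each remaining node. A minimal sketch, reusing the addresses from the question:

redis-cli -h 172.17.0.3 -p 6380 CONFIG SET cluster-require-full-coverage no
redis-cli -h 172.17.0.4 -p 6381 CONFIG SET cluster-require-full-coverage no
redis-cli -h 172.17.0.6 -p 6383 CONFIG SET cluster-require-full-coverage no
redis-cli -h 172.17.0.7 -p 6384 CONFIG SET cluster-require-full-coverage no

Keep in mind that keys hashing to the now-uncovered slot range will still fail; only the part of the key space that is still covered remains usable.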
UPDATE:
In order to ensure all slots are covered, you can normally set up a cluster with N masters and N + 1 slaves, then assign one slave to each master (N -> N). The extra slave can replicate data from a random master. When one of your masters goes down, its slave becomes the new master, and you can then make the extra slave replicate from that new master.
In short, you must ensure that each master has at least one slave at all times.
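Reattaching the spare slave to the promoted master can be done with CLUSTER REPLICATE. A sketch, assuming a hypothetical spare slave at 172.17.0.8:6385 (not part of the original setup), after slave1 has been promoted:

# Get the node ID of the promoted master (old slave1)
redis-cli -h 172.17.0.5 -p 6382 CLUSTER MYID
# Make the spare slave replicate that master
redis-cli -h 172.17.0.8 -p 6385 CLUSTER REPLICATE <node-id-from-previous-command>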