Length of each hash:
./redis-cli -c -p 7000 hlen 0
(integer) 7746812
./redis-cli -c -p 7000 hlen 1
(integer) 7746812
./redis-cli -c -p 7000 hlen 2
(integer) 7746812
./redis-cli -c -p 7000 hlen 3
(integer) 7746812
./redis-cli -c -p 7000 hlen 4
(integer) 7746812
./redis-cli -c -p 7000 hlen 5
(integer) 0
Memory for each hash:
./redis-cli -c -p 7000 keys '*'
1) "3"
./redis-cli -c -p 7000 memory usage 3
(integer) 415715543
./redis-cli -c -p 7001 keys '*'
1) "2"
2) "1"
Memory usage for each key:
./redis-cli -c -p 7001 memory usage 1
(integer) 415715543
./redis-cli -c -p 7001 memory usage 2
(integer) 415715543
./redis-cli -c -p 7002 memory usage 0
(integer) 415715543
./redis-cli -c -p 7002 memory usage 4
(integer) 415715543
Memory usage cluster level:
./redis-cli -c -p 7001 info memory
# Memory
used_memory:1004513344
used_memory_human:**957.98M**
used_memory_rss:1030799360
used_memory_rss_human:983.05M
used_memory_peak:1004615496
used_memory_peak_human:958.08M
used_memory_peak_perc:99.99%
used_memory_overhead:2568042
used_memory_startup:1449576
used_memory_dataset:1001945302
used_memory_dataset_perc:99.89%
allocator_allocated:1004619400
allocator_active:1004859392
allocator_resident:1022844928
total_system_memory:75798228992
total_system_memory_human:70.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:239992
allocator_rss_ratio:1.02
allocator_rss_bytes:17985536
rss_overhead_ratio:1.01
rss_overhead_bytes:7954432
mem_fragmentation_ratio:1.03
mem_fragmentation_bytes:26347944
mem_not_counted_for_evict:3162
mem_replication_backlog:1048576
mem_clients_slaves:16922
mem_clients_normal:49694
mem_aof_buffer:3162
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
The same goes for node 7002, and node 7000, which holds only one hash, uses about 480MB.
Question:
Each hash takes about 415MB according to MEMORY USAGE.
So why is used memory 480MB for one hash and 958MB for two hashes?
I have also printed the list of keys in the same cluster above.
The calculations do not tally.
What am I missing here? Kindly advise.
It is not because of that either: I ran MEMORY PURGE, and the memory usage remained the same afterwards.
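Putting the reported numbers for node 7001 side by side:
2 x 415,715,543 bytes = 831,431,086 bytes (about 793M) reported by MEMORY USAGE for keys "1" and "2",
versus used_memory = 1,004,513,344 bytes (957.98M),
a gap of about 173,082,258 bytes (roughly 165M). Node 7000 shows a similar gap between its single hash (~415MB per MEMORY USAGE) and its ~480MB of used memory.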
Redis compiled with 32 bit target uses a lot less memory per key, since pointers are small, but such an instance will be limited to 4 GB of maximum memory usage.
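If that trade-off is acceptable, the Redis source ships a 32-bit build target (this sketch assumes the 32-bit toolchain and libraries are installed on the build host):
make 32bit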
Redis Cluster data sharding: Redis Cluster does not use consistent hashing, but a different form of sharding where every key is conceptually part of what we call a hash slot. There are 16384 hash slots in Redis Cluster, and to compute the hash slot for a given key, we simply take the CRC16 of the key modulo 16384.
How it works: Redis Cluster is an active-passive cluster implementation that consists of master and slave nodes. The cluster uses hash partitioning to split the key space into 16,384 key slots, with each master responsible for a subset of those slots.
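For example, you can ask any node which slot a key hashes to with CLUSTER KEYSLOT, and CLUSTER SLOTS shows which master owns which slot range; that is why "3" ended up on node 7000 while "1" and "2" ended up on node 7001:
./redis-cli -c -p 7000 cluster keyslot 0
./redis-cli -c -p 7000 cluster keyslot 3
./redis-cli -c -p 7000 cluster slots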
Hence, the main culprit for excessive memory usage with Redis is application behaviour. Your application may be storing unnecessary data that does not benefit from being in Redis, or even completely redundant data, i.e., data that is never used for any purpose.
Redis has internal structures that occupy memory in addition to the key names and values themselves; this is reported as "memory overhead" in Redis.
That overhead is why the per-hash MEMORY USAGE figures and the node-level used_memory figures differ.
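To see that overhead broken down per node, MEMORY STATS (Redis 4.0+) reports overhead.total and dataset.bytes, which correspond to the used_memory_overhead and used_memory_dataset fields in the INFO memory output above, along with keys.bytes-per-key, the average cost per key including overhead; MEMORY DOCTOR prints a short health summary:
./redis-cli -c -p 7001 memory stats
./redis-cli -c -p 7001 memory doctor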
We can also use the ziplist encoding to make hashes more memory efficient.
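A hash is kept in the compact ziplist representation only while it stays below the hash-max-ziplist-entries and hash-max-ziplist-value thresholds, and OBJECT ENCODING shows which representation a key currently uses. The thresholds below are only example values, and a single hash with ~7.7 million fields will remain a hashtable unless it is split into many smaller hashes:
./redis-cli -c -p 7000 config get hash-max-ziplist-entries
./redis-cli -c -p 7000 config set hash-max-ziplist-entries 512
./redis-cli -c -p 7000 config set hash-max-ziplist-value 64
./redis-cli -c -p 7000 object encoding 3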