I have a 20GB+ RDB dump in production. I suspect a specific set of keys is bloating it. I'd like a way to reliably spot the 100 biggest objects, either via static dump analysis or by asking the server itself, which by the way holds over 7M objects.
Dump analysis tools like rdbtools are not helpful in this (I think) really common use case!
I was thinking of writing a script that iterates the whole keyset with "redis-cli debug object", but I have the feeling there must be some tool I'm missing.
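To make it concrete, the kind of script I have in mind is a rough sketch like the one below, using the redis-py client (scan_iter and debug_object are redis-py calls; the host/port and the top-100 heap are just illustrative). Keep in mind that DEBUG OBJECT's serializedlength is the RDB-serialized (possibly compressed) size, not the in-memory footprint, so it only gives a relative ranking:

import heapq
import redis  # the redis-py client

r = redis.Redis(host="localhost", port=6379)

top = []  # min-heap of (size, key), capped at the 100 biggest
for key in r.scan_iter(count=1000):  # SCAN instead of KEYS, so the server isn't blocked
    try:
        # serializedlength approximates the key's dump size, not its RAM usage
        size = r.debug_object(key)["serializedlength"]
    except redis.ResponseError:
        continue  # the key may have expired between SCAN and DEBUG OBJECT
    if len(top) < 100:
        heapq.heappush(top, (size, key))
    elif size > top[0][0]:
        heapq.heapreplace(top, (size, key))

for size, key in sorted(top, reverse=True):
    print(size, key.decode("utf-8", errors="replace"))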
An option was added to redis-cli: redis-cli --bigkeys. Note that it only remembers the single biggest key it has seen for each data type while sampling, so it gives a quick overview rather than a full top-100 ranking.
Sample output based on https://gist.github.com/michael-grunder/9257326
$ ./redis-cli --bigkeys

# Press ctrl+c when you have had enough of it... :)
# You can use -i 0.1 to sleep 0.1 sec every 100 sampled keys
# in order to reduce server load (usually not needed).

Biggest string so far: day:uv:483:1201737600, size: 2
Biggest string so far: day:pv:2013:1315267200, size: 3
Biggest string so far: day:pv:3:1290297600, size: 5
Biggest zset so far: day:topref:2734:1289433600, size: 3
Biggest zset so far: day:topkw:2236:1318723200, size: 7
Biggest zset so far: day:topref:651:1320364800, size: 20
Biggest string so far: uid:3467:auth, size: 32
Biggest set so far: uid:3029:allowed, size: 1
Biggest list so far: last:175, size: 51

-------- summary -------

Sampled 329 keys in the keyspace!
Total key length in bytes is 15172 (avg len 46.12)

Biggest list found 'day:uv:483:1201737600' has 5235597 items
Biggest set found 'day:uvx:555:1201737600' has 47 members
Biggest hash found 'day:uvy:131:1201737600' has 2888 fields
Biggest zset found 'day:uvz:777:1201737600' has 1000 members

0 strings with 0 bytes (00.00% of keys, avg size 0.00)
19 lists with 5236744 items (05.78% of keys, avg size 275618.11)
50 sets with 112 members (15.20% of keys, avg size 2.24)
250 hashs with 6915 fields (75.99% of keys, avg size 27.66)
10 zsets with 1294 members (03.04% of keys, avg size 129.40)
redis-rdb-tools does have a memory report that does exactly what you need. It generates a CSV file with the memory used by every key. You can then sort it and find the top N keys.
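For example, after generating the report with something like rdb -c memory dump.rdb -f memory.csv, picking the top 100 rows is a few lines of Python. The sketch below assumes the report's size_in_bytes / type / key column names; double-check them against the header of the CSV your version actually produces:

import csv
import heapq

# assumes memory.csv was produced by: rdb -c memory dump.rdb -f memory.csv
with open("memory.csv", newline="") as f:
    rows = csv.DictReader(f)
    # keep only the 100 rows with the largest size_in_bytes
    top = heapq.nlargest(100, rows, key=lambda row: int(row["size_in_bytes"]))

for row in top:
    print(row["size_in_bytes"], row["type"], row["key"])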
There is also an experimental memory profiler that started to do what you need. It's not yet complete, and so isn't documented. But you can try it - https://github.com/sripathikrishnan/redis-rdb-tools/tree/master/rdbtools/cli. And of course, I'd encourage you to contribute as well!
Disclaimer: I am the author of this tool.