 

Redis: Multiple unique keys versus bucketing through Hash

Tags:

redis

I have six types of keys in total, say a, b, ..., f, each having around a million subkeys, like a1, a2, ..., a99999 (different in each bucket). What is the faster way to access them?

  1. Having separate top-level keys that combine the bucket name and subkey, like a_a1, b_b1, etc.
  2. Using six hashes as buckets, each holding around a million fields?

I searched Stack Overflow and couldn't find such a comparison for the case of a few buckets with a huge number of keys.

Edit 1: Every key and value is a string of at most 100 characters. I would access them using the Jedis library from Java, making transactions.
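For concreteness, here is a minimal sketch of how the two layouts map to Redis commands. The helper name and the "_" separator are assumptions for illustration; the Jedis calls (SET/GET for option 1, HSET/HGET for option 2) are shown in comments since they need a live Redis server:

```java
// Sketch of the two proposed key layouts.
public class KeyLayouts {

    // Option 1: one flat top-level key per entry, e.g. "a_a1".
    // With Jedis:
    //   jedis.set(flatKey("a", "a1"), value);  // SET a_a1 value
    //   jedis.get(flatKey("a", "a1"));         // GET a_a1
    static String flatKey(String bucket, String subkey) {
        return bucket + "_" + subkey;
    }

    // Option 2: one hash per bucket; the subkey becomes the hash field.
    // With Jedis:
    //   jedis.hset(bucket, subkey, value);     // HSET a a1 value
    //   jedis.hget(bucket, subkey);            // HGET a a1

    public static void main(String[] args) {
        System.out.println(flatKey("a", "a1")); // prints a_a1
    }
}
```

Both lookups are O(1) in Redis, so the difference is mostly in memory layout and in which commands (MGET vs HMGET, etc.) you can batch.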

Mangat Rai Modi asked Feb 26 '15 07:02


1 Answer

Your question reminds me of this article. It doesn't contain performance benchmarks, but it seems your second case (buckets of keys in hashes) will have adequate performance and a small memory footprint.

Maxim answered Sep 18 '22 14:09