I need to store a one-to-one mapping. The dataset consists of a large number of key-value pairs of the same kind (10M+). In Java, for example, one could store such data in a single HashMap instance.
The first option is to store each pair under its own key, like this:
SET map:key1 value1
...
SET map:key900000 value900000
GET map:key1
The second option is to use a single "Hash":
HSET map key1 value1
...
HSET map key900000 value900000
HGET map key1
Redis Hashes have some convenient commands (HMSET, HMGET, HGETALL, etc.), and they don't pollute the keyspace, so this looks like a better option. However, are there any performance or memory considerations when using this approach?
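For reference, here is roughly how the two options look from Java. This is only a minimal sketch; the Jedis client and the connection details are my own assumptions for illustration, not part of the question:

import redis.clients.jedis.Jedis;

public class MappingExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Option 1: one top-level key per pair
            jedis.set("map:key1", "value1");
            String v1 = jedis.get("map:key1");

            // Option 2: a single Hash holding all pairs as fields
            jedis.hset("map", "key1", "value1");
            String v2 = jedis.hget("map", "key1");

            System.out.println(v1 + " " + v2);
        }
    }
}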
Yes, as Itamar Haber says, you should look at this redis memory optimization guide. But you should also keep a few more things in mind:

- Tune hash-max-zipmap-entries and hash-max-zipmap-value (renamed to hash-max-ziplist-entries and hash-max-ziplist-value in Redis 2.6+) if memory is the main target. Be sure to understand what these two settings mean, and take some time to read about the ziplist encoding.
- You can't rely on hash-max-zipmap-entries with 10M+ keys in a single Hash; instead, you should break the one HSET into multiple slots. For example, if you set hash-max-zipmap-entries to 10,000, then storing 10M+ keys requires 1,000+ HSET keys with 10,000 fields each. As a rough rule of thumb, pick the slot with crc32(key) % maxHsets (see the sketch below).
- It may also be useful to read further about this pattern.
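To make the slot-bucketing idea concrete, here is a minimal sketch in Java using the Jedis client. The client, the bucket-key naming scheme (map:<slot>), and the figures (1,000 buckets of 10,000 fields each) are assumptions taken from the example above, not a definitive implementation:

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import redis.clients.jedis.Jedis;

public class BucketedHashStore {
    // 1,000 buckets x 10,000 fields each covers roughly 10M pairs
    // while keeping every bucket small enough for the compact encoding.
    private static final int MAX_HSETS = 1000;

    private final Jedis jedis;

    public BucketedHashStore(Jedis jedis) {
        this.jedis = jedis;
    }

    // Rule of thumb from the answer: crc32(key) % maxHsets picks the slot.
    private String bucketFor(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return "map:" + (crc.getValue() % MAX_HSETS); // hypothetical key naming
    }

    public void put(String key, String value) {
        jedis.hset(bucketFor(key), key, value);
    }

    public String get(String key) {
        return jedis.hget(bucketFor(key), key);
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Keep each bucket under the threshold so Redis keeps the
            // memory-efficient encoding (hash-max-ziplist-entries in 2.6+).
            jedis.configSet("hash-max-ziplist-entries", "10000");

            BucketedHashStore store = new BucketedHashStore(jedis);
            store.put("key1", "value1");
            System.out.println(store.get("key1")); // prints value1
        }
    }
}

The point of the bucketing is that each Hash stays below the configured entry threshold, so Redis can keep it in the compact ziplist encoding instead of a full hash table.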