I am working on developing a key-value store using Redis. I proposed using a hashmap of type String (key) --> Object (value). I have been advised to serialize the object using protobuf.

If we are going to populate as well as read this data using Java (which is platform independent), is there any advantage to using protobuf? Will just putting the object directly into Redis, getting it back, and casting it lead to any problems?

A lot of emphasis is placed on efficiency in this product, so we don't want to do any unnecessary processing.
There is absolutely no need to use protobuf with Redis; the key is simply to pick a serialization framework that will reliably get your data back today, tomorrow, and next year. You could just as well use JSON, XML, etc. In many cases, a single string value is more than sufficient, bypassing serialization completely (unless you count "encoding" as serialization).
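To illustrate that simplest case, here is a minimal sketch using the Jedis client (the client library and key name are assumptions; any Redis client works the same way), where a single string value needs no serialization framework at all:

```java
import redis.clients.jedis.Jedis;

public class PlainStringExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // A single string value: no serializer involved at all
            jedis.set("user:42:email", "alice@example.com");
            String email = jedis.get("user:42:email");
            System.out.println(email); // alice@example.com
        }
    }
}
```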
I would usually advise against platform-proprietary serializations, as they might not help you if you need to get the data back in (say) C++ in a years time, and they are usually less flexible in terms of versioning.
Protobuf is a reasonable choice, as its key features include:

- compact, dense binary output
- platform and language independence
- tolerance to versioning (new fields can be added without breaking existing readers)
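As a sketch of how that looks in Java, assuming a hypothetical `User` message compiled from the `.proto` shown in the comment, and again using Jedis (which has binary-safe `set`/`get` overloads):

```java
import redis.clients.jedis.Jedis;

// Assumes a generated protobuf class compiled from something like:
//   message User { string name = 1; int32 age = 2; }
public class ProtobufRedisExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            User user = User.newBuilder().setName("alice").setAge(30).build();

            // Protobuf serializes to a compact byte[], which Redis stores as-is
            jedis.set("user:42".getBytes(), user.toByteArray());

            // Deserialize on the way back out
            User loaded = User.parseFrom(jedis.get("user:42".getBytes()));
            System.out.println(loaded.getName());
        }
    }
}
```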
However, other serializers would work too. You could even use plain text and a Redis hash, i.e. one hash field per object property, as sketched below. In most cases, though, you want to get an entire object at once, so a simple "get" plus handing the data to a suitable serialization API is usually more appropriate.
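The hash-per-object alternative, again sketched with Jedis (key and field names are illustrative):

```java
import java.util.Map;
import redis.clients.jedis.Jedis;

public class RedisHashExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // One hash field per object property: no serializer involved
            jedis.hset("user:42", "name", "alice");
            jedis.hset("user:42", "age", "30");

            // Individual properties can be read without fetching the whole object...
            String age = jedis.hget("user:42", "age");

            // ...or the whole object can be fetched as a field map
            Map<String, String> user = jedis.hgetAll("user:42");
            System.out.println(user); // {name=alice, age=30}
        }
    }
}
```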
In our own use of Redis, we do happen to use protobuf, but we also do a speculative check: does the protobuf output compress with gzip at all? If it does, we store the gzipped data; otherwise we store the original uncompressed data, since it is smaller (and obviously a marker to say which it is).
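A minimal sketch of that speculative-compression step (the single marker byte scheme here is an assumption for illustration, not the exact format used):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class SpeculativeGzip {
    private static final byte RAW = 0;      // payload stored as-is
    private static final byte GZIPPED = 1;  // payload is gzip-compressed

    /** Returns the payload prefixed with a one-byte marker, gzipped only if that saves space. */
    static byte[] pack(byte[] payload) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(payload);
        }
        byte[] compressed = buffer.toByteArray();

        // Keep whichever representation is smaller, and record which one we kept
        boolean useGzip = compressed.length < payload.length;
        byte[] body = useGzip ? compressed : payload;
        byte[] result = new byte[body.length + 1];
        result[0] = useGzip ? GZIPPED : RAW;
        System.arraycopy(body, 0, result, 1, body.length);
        return result;
    }
}
```

On the read path, the first byte tells you whether to run the remainder through a `GZIPInputStream` before handing it to the protobuf parser.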