Our current caching implementation stores large report objects (50 MB in some cases).
We’ve moved from an in-memory cache to a file cache and use protobuf to serialize and deserialize. This works well; however, we are now experimenting with Redis. Below is an example of how much longer Redis takes than the file system. (Note: in the example below, using protobuf instead of JsonConvert and setting a byte array improves the set time to 15 seconds and the get time to 4 seconds; a sketch of that variant follows the examples.)
// Extremely SLOW – caching using Redis (JsonConvert to serialize/deserialize)
IDatabase cache = Connection.GetDatabase();
// 23 seconds!
cache.StringSet("myKey", JsonConvert.SerializeObject(bigObject));
// 5 seconds!
BigObject redisResult = JsonConvert.DeserializeObject<BigObject>(cache.StringGet("myKey"));
// FAST – caching using file system (protobuf to serialize/deserialize)
IDataAccessCache fileCache = new DataAccessFileCache();
// .5 seconds
fileCache.SetCache("myKey",bigObject);
// .5 seconds
BigObject fileResult = fileCache.GetCache<BigObject>("myKey");
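For reference, here is roughly what the protobuf-over-Redis variant from the note above looks like (a sketch assuming protobuf-net; BigObject would need [ProtoContract]/[ProtoMember] attributes):
// Faster, but still slow – caching using Redis (protobuf to a byte array)
IDatabase cache = Connection.GetDatabase();
byte[] payload;
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, bigObject);
    payload = ms.ToArray();
}
// ~15 seconds
cache.StringSet("myKey", payload);
// ~4 seconds
byte[] raw = cache.StringGet("myKey");
BigObject redisProtoResult;
using (var ms = new MemoryStream(raw))
    redisProtoResult = ProtoBuf.Serializer.Deserialize<BigObject>(ms);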
Thanks in advance for any help.
P.S. I didn’t find an answer in these similar questions: "Caching large objects - LocalCache performance" or "Caching large objects, reducing impact of retrieval times".
You can store up to 512 MB in a single Redis string, and a string can hold any kind of data: text, integers, floats, or binary blobs such as images, audio, or video.
An in-process cache can grow to almost any size as long as you have the RAM to handle it, so 30 MB won't be a problem unless you are on a very constrained device. You can specify an optional size limit on a MemoryCache instance, but that is optional and defaults to no limit.
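For illustration, a minimal sketch of that size limit, assuming Microsoft.Extensions.Caching.Memory (the units of SizeLimit are whatever convention you pick; bytes here):
// Optional size cap; once SizeLimit is set, every entry must declare its own size
var memCache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 100L * 1024 * 1024 });
memCache.Set("myKey", bigObject,
    new MemoryCacheEntryOptions().SetSize(50L * 1024 * 1024));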
Redis is a good fit for a highly available in-memory cache: it decreases data-access latency, increases throughput, and eases the load on your relational or NoSQL database and application.
Azure Redis Cache is generally available in sizes up to 53 GB with a 99.9% availability SLA. The premium tier offers sizes up to 530 GB and adds clustering, VNET support, and persistence, also with a 99.9% SLA.
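Connecting to an Azure cache from StackExchange.Redis looks roughly like this (hostname and access key are placeholders):
// 6380 is the SSL port on Azure Redis Cache
var muxer = ConnectionMultiplexer.Connect(
    "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
IDatabase azureCache = muxer.GetDatabase();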
Redis is not really designed for storing large objects (many MB each) because it is a single-threaded server. A single request will be fast enough, but several concurrent requests will be slow, because they are all processed by that one thread. Recent versions include some optimizations here.
RAM speed and memory bandwidth seem less critical for overall performance, especially for small objects; for large objects (>10 KB) the effect may become noticeable. It is usually not cost-effective to buy expensive fast memory modules to optimize Redis. See https://redis.io/topics/benchmarks
So you can use jumbo frames or buy faster memory if possible, but it won't help significantly. Consider using Memcached instead: it is multi-threaded and can be scaled out horizontally to support large amounts of data, whereas Redis can be scaled only with master-slave replication.
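One mitigation worth trying for 50 MB values (my own illustration, not from the benchmarks above) is to gzip the serialized payload before writing it, so less data has to cross the wire and occupy the single Redis thread:
// Assumes System.IO.Compression; payload is the protobuf byte array from earlier
static byte[] Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
            gzip.Write(data, 0, data.Length);
        return output.ToArray();
    }
}
static byte[] Decompress(byte[] data)
{
    using (var input = new MemoryStream(data))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        gzip.CopyTo(output);
        return output.ToArray();
    }
}
cache.StringSet("myKey", Compress(payload));
byte[] restored = Decompress(cache.StringGet("myKey"));
Whether this wins depends on how compressible the reports are; it trades client CPU for less time spent in Redis itself.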