We're currently looking for the most suitable way to access critical data on a distributed system, and we're weighing in-memory caching against a centralized cache.
Some information about the data we wish to store/access:
The way we see it is as follows:
In memory cache
Pros:
Cons:
Centralized cache
For the sake of conversation, we've considered using Redis.
Pros:
Cons:
Memory is pooled into a single data store or data cache to provide faster access to data. A centralized cache is typically housed on a single physical server kept on site, whereas a distributed cache pools the memory of multiple nodes.
Centralized cache management enables you to specify paths to directories that are cached by HDFS, thereby improving performance for applications that repeatedly access the same data. Centralized cache management in HDFS is an explicit caching mechanism.
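For reference, HDFS centralized caching is driven through the `hdfs cacheadmin` CLI: you create a cache pool and then add a directive for the path you want cached. A minimal sketch (the pool name and path here are placeholders, not values from the question):

```shell
# Create a cache pool, then tell HDFS to cache a directory into it.
# "hot-data-pool" and /datasets/hot are placeholder names.
hdfs cacheadmin -addPool hot-data-pool
hdfs cacheadmin -addDirective -path /datasets/hot -pool hot-data-pool

# List active directives to confirm the path is being cached.
hdfs cacheadmin -listDirectives
```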
The Distributed Memory Cache (AddDistributedMemoryCache) is a framework-provided implementation of IDistributedCache that stores items in memory. The Distributed Memory Cache isn't an actual distributed cache. Cached items are stored by the app instance on the server where the app is running.
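To illustrate the limitation described above in language-neutral terms (a Python sketch, not the .NET API itself): an in-process cache is just memory owned by one app instance, so a second instance starts with an empty cache of its own and never sees the first instance's entries.

```python
class InProcessCache:
    """Minimal per-instance cache, analogous to a non-shared in-memory cache."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


instance_a = InProcessCache()  # cache inside app instance A
instance_b = InProcessCache()  # cache inside app instance B

instance_a.set("session:42", "alice")

# Instance B never sees A's entry -- each process caches independently.
assert instance_a.get("session:42") == "alice"
assert instance_b.get("session:42") is None
```

This is exactly why a cache that must be consistent across app instances needs an external store such as Redis.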
I don't see any problem with going for a centralized cache using Redis.
Even if the cache is not available, the system should still work (with some added latency, obviously): the app logic should check Redis for the value; if it's not there, or Redis itself is unavailable, it should fetch the value from the DB, populate it into Redis, and then serve it to the client.
That way, even if your Redis master and replicas are down, your application keeps working, just with a delay, and your cache stays up to date.
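The fallback logic above can be sketched as a cache-aside read. This is an illustration, not production code: `FlakyCache` and `fake_db_lookup` are stand-ins for a real redis-py client and your database layer, and `CacheUnavailableError` plays the role of a Redis connection error.

```python
class CacheUnavailableError(Exception):
    """Raised when the cache backend (e.g. Redis master and replicas) is down."""


class FlakyCache:
    """In-memory stand-in for a Redis client; up=False simulates an outage."""

    def __init__(self, up=True):
        self.up = up
        self._store = {}

    def get(self, key):
        if not self.up:
            raise CacheUnavailableError("cache backend unreachable")
        return self._store.get(key)

    def set(self, key, value):
        if not self.up:
            raise CacheUnavailableError("cache backend unreachable")
        self._store[key] = value


def fake_db_lookup(key):
    """Stand-in for the authoritative database read (slower, always available)."""
    return f"db-value-for-{key}"


def get_value(key, cache):
    # 1. Try the cache; treat an outage the same as a cache miss.
    try:
        cached = cache.get(key)
        if cached is not None:
            return cached
    except CacheUnavailableError:
        pass
    # 2. Fall back to the database.
    value = fake_db_lookup(key)
    # 3. Best-effort repopulation so the next read is fast again.
    try:
        cache.set(key, value)
    except CacheUnavailableError:
        pass  # cache still down; we served from the DB, which is fine
    return value
```

With a real deployment you would replace `FlakyCache` with `redis.Redis` and catch `redis.exceptions.ConnectionError` instead of the stand-in exception.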
Hope this helps.