
Spring 4, sharing cache between nodes

We have a Spring Boot application running on two nodes. We want to keep some data in a cache instead of calling an external service every 5 seconds. The question is: how do we share a cache between the two nodes? Is it possible? Or should we create two separate caches, one per node? Which approach is better? I suppose that maintaining a shared cache is quite hard. Thanks for any tips.
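For reference, the "one cache per node" option can be sketched in plain Java: a tiny per-node cache whose TTL matches the 5-second polling interval, so each node calls the external service at most once per interval. This is a minimal sketch; the class and method names are hypothetical, not from any framework.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal per-node TTL cache: values are reloaded via the supplied loader
// once they are older than ttlMillis. Thread-safe via ConcurrentHashMap.
class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    V get(K key, Supplier<V> loader) {
        final long now = System.currentTimeMillis();
        // Reload only when the entry is missing or expired; otherwise reuse it.
        Entry<V> e = map.compute(key, (k, old) ->
                (old == null || old.expiresAt <= now)
                        ? new Entry<>(loader.get(), now + ttlMillis)
                        : old);
        return e.value;
    }
}
```

With a 5-second TTL, repeated lookups within the interval return the cached value without touching the loader again.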

asked Apr 28 '16 12:04 by user3528733

People also ask

Is Hazelcast a distributed cache?

We solve this problem by using a distributed cache. Hazelcast is a distributed in-memory object store and provides many features, including TTL, write-through, and scalability.

How does Java distributed cache work?

In distributed mode, the Object Caching Service for Java can share objects and communicate with other caches running either locally on the same machine or remotely across the network. Object updates and invalidations are propagated between communicating caches.

Is Redis distributed cache?

Redis is an open source in-memory data store, which is often used as a distributed cache. You can configure an Azure Redis Cache for an Azure-hosted ASP.NET Core app, and use an Azure Redis Cache for local development.

Is ehcache distributed?

Ehcache is a pluggable cache for Hibernate, tuned for high concurrent load on large multi-CPU servers; it provides LRU, LFU, and FIFO cache eviction policies and is production tested. Ehcache is used by LinkedIn to cache member profiles.


1 Answer

I'll pick up your term "shared cache", which stands for a clustered or distributed cache product such as Infinispan, Hazelcast, or Apache Ignite.

You may want a shared cache for the following reasons:

Consistency: If your application updates the cache on one node, a shared cache takes care of propagating the update and ensures that every node sees the new value once the update has finished. This is something a shared cache can give you, but not every "shared cache" product necessarily does.

Times 10 problem: When you add more nodes, a shared cache limits the requests to the external service; otherwise, each node might request the identical value.

Big data: This applies to a distributed cache: you can cache more data than there is space in one system.
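As an illustration of how little wiring one of these products needs, here is a sketch of Hazelcast behind Spring's cache abstraction in a Spring Boot app. This assumes `spring-boot-starter-cache` and `com.hazelcast:hazelcast-spring` on the classpath; the cache name `externalData` and the service class are hypothetical.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.hazelcast.config.Config;

@Configuration
@EnableCaching
public class SharedCacheConfig {

    // Spring Boot auto-configures a HazelcastInstance (and a cache manager
    // backed by it) when a Hazelcast Config bean is present.
    @Bean
    public Config hazelcastConfig() {
        Config config = new Config();
        config.getMapConfig("externalData")
              .setTimeToLiveSeconds(5); // entries expire, matching the 5 s polling interval
        return config;
    }
}

// Hypothetical service: the first node to miss the cache calls the external
// service; both nodes then see the shared entry until it expires.
@org.springframework.stereotype.Service
class ExternalDataService {
    @Cacheable("externalData")
    public String fetch(String key) {
        return callExternalService(key); // placeholder for the real remote call
    }

    private String callExternalService(String key) {
        return "...";
    }
}
```

Hazelcast members discover each other and form a cluster, so both nodes read and write the same `externalData` map.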

But you get these benefits at the cost of additional configuration and deployment complexity. Also, the access latency to a shared cache is usually much higher than to a local cache. For a comparison, take a look at these benchmarks.

Wrap up: You have two nodes now. If you don't have the problem of coordinated updates or invalidation, stay with a simple local cache. If you want to be future-proof and have spare time to tinker with shared caches, go for it :)
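For the simple local-cache route, Spring's built-in cache abstraction is enough. A minimal sketch (the cache name `externalData` is again hypothetical; assumes `spring-boot-starter-cache` on the classpath):

```java
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class LocalCacheConfig {

    // One in-memory cache per node; the nodes may briefly see different
    // values, which is fine if you don't need coordinated updates.
    @Bean
    public CacheManager cacheManager() {
        // Note: ConcurrentMapCacheManager has no TTL or eviction; pair it
        // with a scheduled @CacheEvict, or swap in a TTL-capable store,
        // if entries must expire every 5 seconds.
        return new ConcurrentMapCacheManager("externalData");
    }
}
```

Any `@Cacheable("externalData")` method then caches its results independently on each node.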

answered Oct 04 '22 12:10 by cruftex