I need to cache data coming from a ReactiveMongoRepository. The data gets updated roughly twice a year, so I don't care about expiring the cache.
Since we can't use @Cacheable with a Flux, I'd like a straightforward way to store the data coming from Mongo in Redis, serve the cached data if present, and otherwise store it and serve the original data.
Is there a more straightforward way to do so than
@GetMapping
public Flux<AvailableInspection> getAvailableInspectionsRedis() {
    AtomicInteger ad = new AtomicInteger();
    return availableInspectionReactiveRedisOperations.opsForZSet()
        .range("availableInspections",
               Range.<Long>from(Range.Bound.inclusive(0L)).to(Range.Bound.inclusive(-1L)))
        .switchIfEmpty(availableInspectionMongoRepository.findAll().map(e -> {
            availableInspectionReactiveRedisOperations.opsForZSet()
                .add("availableInspections", e, ad.getAndIncrement()).block();
            return e;
        }));
}
What I'm explicitly looking for is an option that caches the data just as the @Cacheable annotation would. I'm looking for a generic solution that can cache any kind of Flux.
I doubt there is an off-the-shelf solution for this problem. However, you can easily build your own interface for getting generic cached objects and loading them to cache:
public interface GetCachedOrLoad<T> {
    Flux<T> getCachedOrLoad(String key, Flux<T> loader, Class<? extends T> clazz);
}
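For intuition, here is what that contract boils down to in blocking terms: run the loader on the first miss, serve the cached result afterwards. A minimal stdlib sketch (the class and method names are my own, not part of the answer):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Blocking analogue of the GetCachedOrLoad contract: the loader runs only
// when the key is absent; later calls are served from the cache.
public class SimpleCache<T> {

    private final Map<String, List<T>> store = new ConcurrentHashMap<>();

    public List<T> getOrLoad(String key, Supplier<List<T>> loader) {
        // computeIfAbsent invokes the loader only on a cache miss
        return store.computeIfAbsent(key, k -> loader.get());
    }
}
```

The reactive version below does the same thing, except that "absent" is expressed as an empty Flux and the write-through happens as elements stream past.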
Each class that requires this functionality will just inject it via constructor and use it as follows:
public class PersistedObjectRepository {

    private final GetCachedOrLoad<PersistedObject> getCachedOrLoad;

    public PersistedObjectRepository(final GetCachedOrLoad<PersistedObject> getCachedOrLoad) {
        this.getCachedOrLoad = getCachedOrLoad;
    }

    public Flux<PersistedObject> queryPersistedObject(final String key) {
        return getCachedOrLoad.getCachedOrLoad(key, queryMongoDB(key), PersistedObject.class);
    }

    private Flux<PersistedObject> queryMongoDB(String key) {
        // use the reactive Mongo API to retrieve a Flux<PersistedObject>
    }
}
And then you'll need to create an object implementing GetCachedOrLoad<T> and make it available for dependency injection.
public class RedisCache<T> implements GetCachedOrLoad<T> {

    private final Function<String, Flux<String>> getFromCache;
    private final BiFunction<String, String, Mono<Long>> loadToCache;
    private final Gson gson;

    public RedisCache(Gson gson, RedisReactiveCommands<String, String> redisCommands) {
        this.getFromCache = key -> redisCommands.lrange(key, 0, -1);
        // rpush (not lpush), so that lrange returns elements in insertion order
        this.loadToCache = redisCommands::rpush;
        this.gson = gson;
    }

    @Override
    public Flux<T> getCachedOrLoad(final String key, Flux<T> loader, Class<? extends T> clazz) {
        final Flux<T> cacheResults = getFromCache.apply(key)
            .map(json -> gson.fromJson(json, clazz));
        return cacheResults.switchIfEmpty(
            // concatMap subscribes to the rpush Mono and preserves element order;
            // a bare doOnNext would create the Redis command but never execute it
            loader.concatMap(value -> loadToCache.apply(key, gson.toJson(value)).thenReturn(value)));
    }
}
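A side note on ordering: Redis LPUSH prepends each element to the head of the list, so reading back with LRANGE yields the reverse of insertion order, while RPUSH appends and preserves it. That matters whenever the cached Flux is expected to replay elements in the order the loader emitted them. The same effect can be seen with a stdlib Deque (names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class PushOrderDemo {

    // Head insertion, like LPUSH: read-back order is reversed
    static List<String> lpushOrder(List<String> in) {
        Deque<String> d = new ArrayDeque<>();
        in.forEach(d::addFirst); // newest element ends up at the head
        return List.copyOf(d);   // like LRANGE key 0 -1
    }

    // Tail insertion, like RPUSH: insertion order is preserved
    static List<String> rpushOrder(List<String> in) {
        Deque<String> d = new ArrayDeque<>();
        in.forEach(d::addLast);
        return List.copyOf(d);
    }
}
```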
Hope this is generic enough :) .
PS. This is not a production-ready implementation and needs to be tuned to your own needs, e.g. adding exception handling, customizing the JSON serialization, and so on.
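To make the implementation injectable, one option is a small Spring configuration class. This is a hypothetical wiring sketch, not part of the answer: it assumes a Gson bean and a Lettuce RedisReactiveCommands<String, String> bean already exist in the context, and all names are illustrative.

```java
// Hypothetical wiring; bean names and the surrounding configuration
// class are assumptions, not part of the original answer.
@Configuration
class CacheConfig {

    @Bean
    GetCachedOrLoad<PersistedObject> persistedObjectCache(
            Gson gson, RedisReactiveCommands<String, String> redisCommands) {
        return new RedisCache<>(gson, redisCommands);
    }
}
```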