I'm dealing with some third-party library code that involves creating expensive objects and caching them in a Map. The existing implementation is something like:
lock.lock();
try {
    Foo result = cache.get(key);
    if (result == null) {
        result = createFooExpensively(key);
        cache.put(key, result);
    }
    return result;
} finally {
    lock.unlock();
}
Obviously this is not the best design when Foos for different keys can be created independently.
My current hack is to use a Map of Futures:
lock.lock();
Future<Foo> future;
try {
    future = allFutures.get(key);
    if (future == null) {
        future = executorService.submit(new Callable<Foo>() {
            public Foo call() {
                return createFooExpensively(key);
            }
        });
        allFutures.put(key, future);
    }
} finally {
    lock.unlock();
}

try {
    return future.get();
} catch (InterruptedException e) {
    throw new MyRuntimeException(e);
} catch (ExecutionException e) {
    throw new MyRuntimeException(e);
}
But this seems... a little hacky: even when the Map is fully populated, we still go through Future.get() to get the results. I expect this is pretty cheap, but it's ugly.

What I'd like is to replace cache with a Map that will block gets for a given key until that key has a value, but allow other gets meanwhile. Does any such thing exist? Or does someone have a cleaner alternative to the Map of Futures?
Creating a lock per key sounds tempting, but it may not be what you want, especially when the number of keys is large.
Since you would probably need to create a dedicated (read-write) lock for each key, it has an impact on your memory usage. Also, that fine a granularity may hit a point of diminishing returns given a finite number of cores if concurrency is truly high.
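(To make that concrete, a lock-per-key variant would look roughly like the hypothetical sketch below; the locks map and variable names are made up for illustration. Note that a lock object accumulates in the map for every distinct key ever requested.)

// Hypothetical lock-per-key sketch: each distinct key gets, and keeps, its own lock.
ConcurrentMap<Key, ReentrantLock> locks = new ConcurrentHashMap<>();
...
ReentrantLock newLock = new ReentrantLock();
ReentrantLock keyLock = locks.putIfAbsent(key, newLock);
if (keyLock == null) {
    keyLock = newLock;   // this thread's lock won the race
}
keyLock.lock();
try {
    Foo result = cache.get(key);
    if (result == null) {
        result = createFooExpensively(key);
        cache.put(key, result);
    }
    return result;
} finally {
    keyLock.unlock();
}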
ConcurrentHashMap is oftentimes a good enough solution in a situation like this. It provides full reader concurrency (readers normally do not block), and updates can be concurrent up to the desired concurrency level. This gives you pretty good scalability. The above code may be expressed with ConcurrentHashMap like the following:
ConcurrentMap<Key, Foo> cache = new ConcurrentHashMap<>();
...
Foo result = cache.get(key);
if (result == null) {
    result = createFooExpensively(key);
    Foo old = cache.putIfAbsent(key, result);
    if (old != null) {
        result = old;
    }
}
The straightforward use of ConcurrentHashMap does have one drawback, which is that multiple threads may find that the key is not cached, and each may invoke createFooExpensively(). As a result, some threads may do throw-away work. To avoid this, you would want to use the memoizer pattern that's mentioned in "Java Concurrency in Practice".
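For reference, a minimal sketch of that pattern might look like the following (the Memoizer class name and its get signature are illustrative, not the book's exact code):

import java.util.concurrent.*;

// Sketch of the memoizer pattern: each key maps to a Future, so the expensive
// computation runs at most once per key, on the first thread that asks for it,
// while other threads block only on that key's Future.
class Memoizer<K, V> {
    private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();

    public V get(K key, Callable<V> loader) throws InterruptedException, ExecutionException {
        Future<V> future = cache.get(key);
        if (future == null) {
            FutureTask<V> task = new FutureTask<>(loader);
            future = cache.putIfAbsent(key, task);
            if (future == null) {
                // We won the race: run the loader on the calling thread.
                future = task;
                task.run();
            }
        }
        return future.get();
    }
}

A caller would invoke memoizer.get(key, ...) with a Callable that calls createFooExpensively(key); only the thread that wins the putIfAbsent race runs it, and the work happens on that calling thread rather than a pool thread.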
But then again, the nice folks at Google already solved these problems for you in the form of CacheBuilder:
LoadingCache<Key, Foo> cache = CacheBuilder.newBuilder()
    .concurrencyLevel(32)
    .build(new CacheLoader<Key, Foo>() {
        public Foo load(Key key) {
            return createFooExpensively(key);
        }
    });
...
// getUnchecked avoids the checked ExecutionException that get(key) declares
Foo result = cache.getUnchecked(key);
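If createFooExpensively can throw a checked exception, declare it on load and call cache.get(key) instead, handling the ExecutionException it throws. CacheBuilder also lets you bound the cache (for example with maximumSize or expireAfterWrite) if the cached Foos should not live forever.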