I need a thread-safe and efficient LRU cache implementation. The code below is not thread-safe. Can it be made thread-safe using ConcurrentHashMap?
Thanks in advance.
private class LruCache<A, B> extends LinkedHashMap<A, B> {
    private final int maxEntries;

    public LruCache(final int maxEntries) {
        super(maxEntries + 1, 1.0f, true); // true = access-order iteration, i.e. LRU order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(final Map.Entry<A, B> eldest) {
        // Called after put()/putAll(); returning true evicts the least recently used entry.
        return super.size() > maxEntries;
    }
}
The core concept of the LRU algorithm is to evict the oldest data from the cache to accommodate more data. To implement an LRU cache we use two data structures: a hashmap and a doubly linked list. A doubly linked list helps in maintaining the eviction order and a hashmap helps with O(1) lookup of cached keys.
A simple LRU cache implementation uses a doubly linked list: new items are added at the head, items are evicted from the tail, and existing items are moved to the head whenever they are referenced (touched). This works well for single-threaded applications, but in a multi-threaded environment every operation has to be serialized through a lock, which quickly becomes a bottleneck.
Implementing an LRU cache in Java as a queue: using a doubly linked list, one can build a queue whose maximum size equals the cache size (the total number of frames that can be held).
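Below is a minimal, single-threaded sketch of that hashmap-plus-doubly-linked-list approach. The class and member names (SimpleLruCache, moveToHead, and so on) are illustrative only, not taken from any library, and the code assumes a capacity of at least 1.
import java.util.HashMap;
import java.util.Map;

class SimpleLruCache<K, V> {
    private class Node {
        final K key;
        V value;
        Node prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;                           // assumed >= 1
    private final Map<K, Node> index = new HashMap<>();   // O(1) key lookup
    private final Node head = new Node(null, null);       // sentinel: most recently used side
    private final Node tail = new Node(null, null);       // sentinel: least recently used side

    SimpleLruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    V get(K key) {
        Node n = index.get(key);
        if (n == null) return null;
        moveToHead(n);                 // touching an entry makes it most recently used
        return n.value;
    }

    void put(K key, V value) {
        Node n = index.get(key);
        if (n != null) {               // existing entry: update and touch
            n.value = value;
            moveToHead(n);
            return;
        }
        if (index.size() == capacity) {
            Node lru = tail.prev;      // evict from the tail (least recently used)
            unlink(lru);
            index.remove(lru.key);
        }
        n = new Node(key, value);
        index.put(key, n);
        linkAfterHead(n);
    }

    private void moveToHead(Node n) { unlink(n); linkAfterHead(n); }

    private void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void linkAfterHead(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}
Making this thread-safe would again mean guarding get and put with a single lock, which is exactly the contention problem described above.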
Found Guava's Cache. Haven't used it myself.
A Cache is similar to ConcurrentMap, but not quite the same.
Source: https://github.com/google/guava/wiki/CachesExplained
Example:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .expireAfterAccess(10, TimeUnit.MINUTES)
    .maximumSize(1000)
    .build(
        new CacheLoader<Key, Graph>() {
            public Graph load(Key key) { // no checked exception
                return createExpensiveGraph(key);
            }
        });
...
return graphs.getUnchecked(key);
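A note on the example above: getUnchecked(key) is appropriate because the CacheLoader declares no checked exception; if load could throw one, you would call graphs.get(key) instead and handle the ExecutionException it wraps failures in.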
The best you can do to make it thread-safe is to wrap it with Collections.synchronizedMap(map), as explained in the javadoc:
Note that this implementation is not synchronized. If multiple threads access a linked hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:
Map m = Collections.synchronizedMap(new LinkedHashMap(...));
However, that alone is not enough to make it fully thread-safe; you still need to protect any iteration over the content of the map by using the wrapped map instance as the monitor:
Map m = Collections.synchronizedMap(map);
...
Set s = m.keySet(); // Needn't be in synchronized block
...
synchronized (m) { // Synchronizing on m, not s!
    Iterator i = s.iterator(); // Must be in synchronized block
    while (i.hasNext())
        foo(i.next());
}
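Putting that together with the LruCache from the question, a minimal sketch of the wrapped version could look like this (assuming LruCache is accessible, e.g. as a top-level or static nested class; the capacity of 100 is arbitrary):
Map<String, String> cache = Collections.synchronizedMap(new LruCache<String, String>(100));

cache.put("key", "value");       // every call goes through the wrapper's single lock
String value = cache.get("key"); // get() also updates the access order, still under the lock

synchronized (cache) {           // iteration must hold the wrapper itself as the monitor
    for (Map.Entry<String, String> entry : cache.entrySet()) {
        System.out.println(entry.getKey() + "=" + entry.getValue());
    }
}
This is thread-safe but coarse-grained: all readers and writers contend for the same lock.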
This is pretty much all you can easily do with what is available out of the box in the JDK. If you want something thread-safe and more efficient, you should look at Cache from Google Guava.
Here is an example of an LRU cache with a max size of 2, built with Guava:
ConcurrentMap<String, String> cache =
    CacheBuilder.newBuilder()
        .maximumSize(2L)
        .<String, String>build().asMap();
cache.put("a", "b");
cache.put("b", "c");
System.out.println(cache);
cache.put("a", "d");
System.out.println(cache);
cache.put("c", "d");
System.out.println(cache);
Output:
{b=c, a=b}
{b=c, a=d}
{c=d, a=d}
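In the output above, updating "a" counts as a recent access, so when "c" is added to the already-full cache it is "b", the least recently used entry, that gets evicted. Keep in mind that the Guava wiki describes size-based eviction as removing entries that haven't been used recently or very often, which is LRU-like rather than a strict ordering guarantee.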