I know it's simple to implement, but I want to reuse something that already exists.
The problem I want to solve is that I load configuration (from XML, so I want to cache it) for different pages, roles, etc., so the combination of inputs can grow quite large (but in 99% of cases it won't). To handle that 1%, I want to have some maximum number of items in the cache...
So far I have found org.apache.commons.collections.map.LRUMap in Apache Commons, and it looks fine, but I also want to check out alternatives. Any recommendations?
A Least Recently Used (LRU) cache organizes items in order of use, allowing you to quickly identify which item hasn't been used for the longest amount of time. To implement an LRU cache we use two data structures: a hash map and a doubly linked list. The doubly linked list maintains the eviction order, and the hash map provides O(1) lookup of cached keys.
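For concreteness, here is a minimal sketch of that approach in Java. The class and member names (LruCache, Node) are illustrative, not from any particular library; sentinel head and tail nodes are used to keep the list surgery simple.

```java
import java.util.HashMap;
import java.util.Map;

public class LruCache<K, V> {
    private static final class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    // Sentinel nodes avoid null checks when linking/unlinking at the ends.
    private final Node<K, V> head = new Node<>(null, null);
    private final Node<K, V> tail = new Node<>(null, null);

    public LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null;
        moveToFront(node);                 // mark as most recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {                // update existing entry
            node.value = value;
            moveToFront(node);
            return;
        }
        if (map.size() == capacity) {
            // Evict the least recently used entry (just before the tail sentinel).
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        node = new Node<>(key, value);
        map.put(key, node);
        linkFirst(node);
    }

    private void moveToFront(Node<K, V> node) {
        unlink(node);
        linkFirst(node);
    }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void linkFirst(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}
```

Both get and put run in O(1): the hash map finds the node, and the list relinking is a constant number of pointer updates.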
You can use a LinkedHashMap (Java 1.4+):
```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Create cache
final int MAX_ENTRIES = 100;
Map<Object, Object> cache = new LinkedHashMap<Object, Object>(MAX_ENTRIES + 1, .75F, true) {
    // This method is called just after a new entry has been added
    @Override
    public boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
        return size() > MAX_ENTRIES;
    }
};

// Add to cache
Object key = "key";
Object value = "value";
cache.put(key, value);

// Get object
Object o = cache.get(key);
if (o == null && !cache.containsKey(key)) {
    // Object not in cache. If null is not a possible value in the cache,
    // the call to cache.containsKey(key) is not needed.
}

// If the cache is to be used by multiple threads,
// the cache must be wrapped with code to synchronize the methods
cache = Collections.synchronizedMap(cache);
```
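The third constructor argument (true) is what makes this an LRU cache: it switches the LinkedHashMap from insertion order to access order, so every get moves the entry to the most recently used position, and removeEldestEntry sees the least recently used entry as the eldest.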
This is an old question, but for posterity I wanted to list ConcurrentLinkedHashMap, which is thread-safe, unlike LRUMap. Usage is quite easy:
```java
ConcurrentMap<K, V> cache = new ConcurrentLinkedHashMap.Builder<K, V>()
    .maximumWeightedCapacity(1000)
    .build();
```
And the documentation has some good examples, such as how to make the LRU cache size-based instead of bounded by the number of items.
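As a hedged sketch of that size-based variant: this is my reading of the library's Weigher-based builder API, so treat the weigher(...) call and the Weigher interface as assumptions to verify against the documentation. The byte[] values and the 1 MiB budget are purely illustrative.

```java
import java.util.concurrent.ConcurrentMap;
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weigher;

// Bound the cache by total weight (here, bytes) instead of entry count.
ConcurrentMap<String, byte[]> cache = new ConcurrentLinkedHashMap.Builder<String, byte[]>()
    .maximumWeightedCapacity(1024 * 1024)   // total weight budget: 1 MiB (illustrative)
    .weigher(new Weigher<byte[]>() {
        @Override
        public int weightOf(byte[] value) {
            return value.length;            // each entry weighs its size in bytes
        }
    })
    .build();
```

With a weigher in place, eviction still follows LRU order, but entries are removed until the total weight drops back under the capacity, so one large configuration can displace several small ones.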