On an ASP.NET MVC project we have several instances of data that require a good amount of resources and time to build. We want to cache them. MemoryCache provides a certain level of thread safety, but not enough to avoid running multiple instances of the building code in parallel. Here is an example:
var data = cache["key"]; if(data == null) { data = buildDataUsingGoodAmountOfResources(); cache["key"] = data; }
As you can see, on a busy website hundreds of threads could enter the if statement simultaneously until the data is built, making the building operation even slower and unnecessarily consuming server resources.
There is an atomic AddOrGetExisting implementation in MemoryCache, but it unhelpfully requires a "value to set" instead of "code to retrieve the value to set", which I think renders the method almost completely useless.
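To make that limitation concrete, here is a minimal sketch (the key name and builder are illustrative): because AddOrGetExisting takes the already-built value, the expensive build runs even when the entry is already cached.

// AddOrGetExisting takes the finished value, not a factory,
// so the expensive call below runs even on a cache hit:
var built = buildDataUsingGoodAmountOfResources(); // always pays the build cost
var existing = cache.AddOrGetExisting("key", built, ObjectCache.InfiniteAbsoluteExpiration);
var data = existing ?? built; // a null return means our value was added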
We have been using our own ad-hoc scaffolding around MemoryCache to get it right; however, it requires explicit locks. It's cumbersome to use per-entry lock objects, and we usually get away with sharing lock objects, which is far from ideal. That made me think the reasons to avoid such a convention could be intentional.
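For illustration, here is a minimal sketch of that kind of scaffolding, assuming one lock object per key held in a ConcurrentDictionary (all names here are illustrative, not our production code):

using System;
using System.Collections.Concurrent;
using System.Runtime.Caching;

public static class AdHocCache
{
    private static readonly ObjectCache cache = MemoryCache.Default;

    // one lock object per cache key; note these are never evicted,
    // which is part of what makes this approach cumbersome
    private static readonly ConcurrentDictionary<string, object> keyLocks =
        new ConcurrentDictionary<string, object>();

    public static T GetOrBuild<T>(string key, Func<T> buildData) where T : class
    {
        var cached = (T)cache[key];
        if (cached != null)
            return cached;

        var keyLock = keyLocks.GetOrAdd(key, _ => new object());
        lock (keyLock) // only one thread per key runs the build
        {
            cached = (T)cache[key]; // re-check inside the lock
            if (cached == null)
            {
                cached = buildData();
                cache[key] = cached;
            }
            return cached;
        }
    }
}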
So I have two questions:

1. Is it better practice not to lock the building code? (That could prove more responsive, for one, I wonder.)

2. What's the right way to achieve per-entry locking for MemoryCache? The strong urge to use the key string as the lock object is dismissed at ".NET locking 101" (a brief illustration follows below).
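For completeness, a minimal illustration of why locking on the key string is dangerous (the literal and class names are made up): string literals are interned, so unrelated code that locks on an equal literal ends up locking on the very same object.

using System;

public static class LockingPitfall
{
    public static void ComponentA()
    {
        // "user:42" is an interned literal, shared process-wide
        lock ("user:42") { Console.WriteLine("A is inside"); }
    }

    public static void ComponentB()
    {
        // this unrelated lock contends with ComponentA's,
        // which can cause surprising contention or even deadlocks
        lock ("user:42") { Console.WriteLine("B is inside"); }
    }
}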
MemoryCache does not allow you to share memory between processes as the memory used to cache objects is bound to the application pool. That's the nature of any in-memory cache implementation you'll find. The only way to actually use a shared cache is to use a distributed cache.
Accelerating online database applications is the most common use case for in-memory caching. For example, a high-traffic website that stores content in a database will benefit significantly from an in-memory cache.
Using a distributed cache offloads the cache memory to an external process. An in-memory cache can store any object, while the distributed cache interface is limited to byte[]. Both store cache items as key-value pairs.
A distributed cache is a cache shared by multiple app servers, typically maintained as an external service to the app servers that access it. A distributed cache can improve the performance and scalability of an ASP.NET Core app, especially when the app is hosted by a cloud service or a server farm.
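To make the byte[] restriction concrete, here is a hedged sketch using ASP.NET Core's IDistributedCache (the key, value, and service names are illustrative; any registered implementation, such as Redis or SQL Server, would work the same way):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class GreetingService
{
    private readonly IDistributedCache cache;

    public GreetingService(IDistributedCache cache)
    {
        this.cache = cache;
    }

    public async Task<string> GetGreetingAsync()
    {
        // the distributed cache interface traffics only in byte[]
        byte[] bytes = await cache.GetAsync("greeting");
        if (bytes != null)
            return Encoding.UTF8.GetString(bytes);

        var value = "hello from the shared cache";
        await cache.SetAsync(
            "greeting",
            Encoding.UTF8.GetBytes(value),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
        return value;
    }
}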
We solved this issue by combining Lazy<T> with AddOrGetExisting to avoid the need for a lock object completely. Here is a sample (which uses infinite expiration):
public T GetFromCache<T>(string key, Func<T> valueFactory)
{
    var newValue = new Lazy<T>(valueFactory);
    // the line below returns the existing item, or adds the new value if it doesn't exist
    var value = (Lazy<T>)cache.AddOrGetExisting(key, newValue, ObjectCache.InfiniteAbsoluteExpiration);
    return (value ?? newValue).Value; // Lazy<T> handles the locking itself
}
That's not complete. There are gotchas like "exception caching", so you have to decide what you want to do in case your valueFactory throws an exception (one way to handle that is sketched below). One of the advantages, though, is the ability to cache null values too.
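For the exception-caching gotcha, one option (our assumption, not something the snippet above does) is a variant that evicts the entry when the factory throws, so the next caller retries instead of having the cached exception replayed. A Lazy<T> created from a delegate defaults to ExecutionAndPublication mode and caches any exception that delegate throws.

public T GetFromCacheEvictOnError<T>(string key, Func<T> valueFactory)
{
    var newValue = new Lazy<T>(valueFactory);
    var value = (Lazy<T>)cache.AddOrGetExisting(key, newValue, ObjectCache.InfiniteAbsoluteExpiration);
    try
    {
        return (value ?? newValue).Value;
    }
    catch
    {
        // the Lazy<T> now permanently holds the exception,
        // so remove it and let a later caller rebuild the value
        cache.Remove(key);
        throw;
    }
}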