System.Runtime.Caching.MemoryCache is a class in the .NET Framework (version 4+) that caches objects in memory, using strings as keys. Going beyond System.Collections.Generic.Dictionary<string, object>, this class has all kinds of bells and whistles that let you configure how big the cache can grow (in either absolute or relative terms), set different expiration policies for different cache items, and so much more.
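For concreteness, here is a minimal sketch of those knobs in use. The configuration keys (`cacheMemoryLimitMegabytes`, `physicalMemoryLimitPercentage`, `pollingInterval`) and the `CacheItemPolicy` expiration properties are part of the real API; the cache name and values are arbitrary examples:

```csharp
using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class MemoryCacheDemo
{
    static void Main()
    {
        var config = new NameValueCollection
        {
            { "cacheMemoryLimitMegabytes", "100" },    // absolute memory limit
            { "physicalMemoryLimitPercentage", "10" }, // relative (percent of physical RAM)
            { "pollingInterval", "00:00:30" }          // how often limits are checked
        };

        var cache = new MemoryCache("demoCache", config);

        // Different expiration policies for different cache items.
        cache.Add("playerA", new object(), new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
        });
        cache.Add("playerB", new object(), new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(1)
        });

        Console.WriteLine(cache.GetCount());
    }
}
```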
My questions relate to the memory limits. None of the docs on MSDN seem to explain this satisfactorily, and the code on Reference Source is fairly opaque. Sorry about piling all of this into one SO "question", but I can't figure out how to split some of these out into their own questions, because they're really just different views of one overall question: "how do you reconcile idiomatic C#/.NET with the notion of a generally useful in-memory cache that has configurable memory limits and is implemented almost entirely in managed code?"
Suppose, for example, that I'm caching Player objects for a game. Each Player has some player-specific state that's encapsulated in a public PlayerStateData PlayerState { get; } property (what direction the player is looking, how many sprockets they're holding, etc.), as well as a reference to the entire game's state, public GameStateData GameState { get; }, that can be used to get back to the game's (much larger) state from a method that only knows about a player. Do we count both PlayerState and GameState when considering the size of a cached player's contribution to the cache? It doesn't seem right to multiply GameState's contribution to the limit by 5 just because 5 players are cached... but then again, a likely implementation might do just that, and it's difficult to count PlayerState without counting GameState.

My own research led me to SRef.cs, which I gave up on trying to understand after getting here, which later leads here. I'm guessing the answers to all these questions would revolve around finding and meditating on the code that ultimately populated the INT64 that's stored in that handle.
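To make the example concrete, here is a sketch of the types described above. The type and property names come from the question; the members of PlayerStateData and GameStateData are assumptions for illustration:

```csharp
// Hypothetical: the game's (much larger) overall state.
public sealed class GameStateData
{
    // ... world map, other players, scores, etc. ...
}

// Hypothetical: the player-specific state from the question.
public sealed class PlayerStateData
{
    public double FacingDegrees { get; set; }  // what direction the player is looking
    public int SprocketCount { get; set; }     // how many sprockets they're holding
}

public sealed class Player
{
    public Player(PlayerStateData playerState, GameStateData gameState)
    {
        PlayerState = playerState;
        GameState = gameState;
    }

    // Player-specific state.
    public PlayerStateData PlayerState { get; }

    // Back-reference to the entire game's state, so code that only
    // knows about a player can still reach it.
    public GameStateData GameState { get; }
}
```

The question is whether caching five such Player instances charges the cache for five copies of the (shared) GameStateData graph.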
I know this is late, but I've done a lot of digging in the source code to try to understand what is going on, and I have a fairly good idea now. I will say that MemoryCache is the worst-documented class on MSDN, which kind of baffles me for something intended to be used by people trying to optimize their applications.
MemoryCache uses a special "sized reference" to measure the size of objects. It all looks like a giant hack in the memory cache source code involving reflection to wrap an internal type called "System.SizedReference", which from what I can tell causes the GC to set the size of the object graph it points to during gen 2 collections.
From my testing, this WILL include the size of parent objects, and thus all child objects referenced by the parent, etc. BUT I've found that if you make references to parent objects weak references (i.e. via WeakReference or WeakReference<>), then they are no longer counted as part of the object graph, so that is what I do for all cache objects now.
I believe cache objects need to be completely self-contained or use weak references to other objects for the memory limit to work at all.
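Applied to the Player example from the question, that approach might look like the following sketch. The class and member names here are hypothetical; the point is that the back-reference to the large shared graph goes through WeakReference<T>, which (per the testing described above) keeps the sized-reference walk from charging GameStateData to every cached player:

```csharp
using System;

// Hypothetical cache-friendly variant of Player: self-contained state,
// plus a weak back-reference to the shared game state.
public sealed class CachedPlayer
{
    private readonly WeakReference<GameStateData> _gameState;

    public CachedPlayer(PlayerStateData playerState, GameStateData gameState)
    {
        PlayerState = playerState;
        _gameState = new WeakReference<GameStateData>(gameState);
    }

    // Counted toward the cache's memory limit.
    public PlayerStateData PlayerState { get; }

    // Not counted: the weak reference breaks the object graph for
    // size-measurement purposes. Returns null if the game state has
    // been collected (the caller must keep it alive elsewhere).
    public GameStateData GameState =>
        _gameState.TryGetTarget(out var gs) ? gs : null;
}
```

The trade-off is that something else must hold a strong reference to the GameStateData, or TryGetTarget will start returning false after a collection.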
If you want to play with it yourself, just copy the code from SRef.cs, create an object graph and point a new SRef instance at it, and then call GC.Collect(). After the collection, the approximate size will be set to the size of the object graph.
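A rough sketch of that experiment is below. It mirrors what SRef.cs does: resolve the internal System.SizedReference type by name and read its non-public ApproximateSize property via reflection. Everything here depends on internal implementation details of the .NET Framework's mscorlib, so the type or property may be renamed or absent in other runtimes (it is not available this way on .NET Core/5+), and the exact reflection flags are a best-effort reconstruction, not a guaranteed recipe:

```csharp
using System;
using System.Globalization;
using System.Reflection;

static class SizedReferenceProbe
{
    // Internal type: may not exist outside the .NET Framework.
    private static readonly Type s_sizedRefType =
        Type.GetType("System.SizedReference", throwOnError: true);

    public static long MeasureApproximateSize(object graphRoot)
    {
        // Wrap the object graph, the way MemoryCache's SRef does.
        object sizedRef = Activator.CreateInstance(
            s_sizedRefType,
            BindingFlags.CreateInstance | BindingFlags.Instance | BindingFlags.NonPublic,
            binder: null,
            args: new object[] { graphRoot },
            culture: null);

        // The size is only populated during a gen 2 collection.
        GC.Collect();

        // Read the non-public ApproximateSize property.
        return (long)s_sizedRefType.InvokeMember(
            "ApproximateSize",
            BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.NonPublic,
            binder: null,
            target: sizedRef,
            args: null,
            culture: CultureInfo.InvariantCulture);
    }
}
```

Swapping a strong reference in the graph for a WeakReference and re-running the measurement is a quick way to confirm the weak-reference behavior described above.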