 

Memcached eviction prior to key expiry?


Can a key/value pair stored in memcached get evicted prior to its expiry if there is still free space available?

I have a memcached process running that is allowed to consume up to 6GB; 2.5GB are reported in use, and that number fluctuates only minimally (+/- 100MB in a one-day span). If I set a simple string value that expires in 15 minutes, is it possible that it would be evicted (cache.get returns not found) before the 15 minutes have elapsed?
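Something like this sketch (using the pymemcache client here, though the specific client library, host, and port aren't the point):

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumed host/port

# Store a simple string with a 15-minute (900-second) expiry.
cache.set("my_key", "a simple string value", expire=900)

# The question: can this come back None (not found) before the
# 15 minutes are up, even though the server has free space?
value = cache.get("my_key")
```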

Thanks, -Eric

esilver asked Jul 29 '11 03:07


People also ask

What is memcached eviction?

Memcached servers in a Memcached memory pool are unaware of each other. The servers track which data in their local array was least-recently used, and will purge that data as needed to make room for newer requests (called an eviction).

What is expiration and eviction?

Eviction is intended to control the total size of the cache and to prevent each cluster member from running out of memory; if configured correctly, it does not remove the entry from the cluster. Expiration, by contrast, is a per-entry policy.

What causes cache eviction?

Cache eviction is a feature where file data blocks in the cache are released when fileset usage exceeds the fileset soft quota, and space is created for new files. The process of releasing blocks is called eviction. However, file data is not evicted if the file data is dirty.


2 Answers

Yes.

Basically, Memcache allocates space in chunks rather than on demand, then stores items into those chunks and manages that memory manually. As a result, smaller items can "use" much larger pieces of memory than they would if space were allocated on a per-item basis.

This link explains it much better than I can:

https://groups.google.com/group/memcached/browse_thread/thread/8f460034418262e7?pli=1

Edit: adding more explanation

Memcache works by allocating slabs of various sizes. These slabs have a number of specifically sized slots (which is determined by the slab's class).

Hypothetically (and using only my abstraction of Memcache's internals), let's say the smallest slab class was 1K. This means that the smallest slots are 1K. Furthermore, Memcache will only allocate these in sets of 1024, or 1MB of memory at a time. Let's say we had such a configuration and we want to store a 1-byte object (char value?) in Memcache. Let's suppose this would require 5 bytes of memory (4-byte key?). In an empty cache, Memcache would allocate a new slab of the smallest size that can hold the value (1K slots). So storing your 5 bytes will cause Memcache to allocate 1MB of memory.

Now, let's say you have a lot of these. The next 1023 will be "free" -- Memcache has already allocated the memory, so no additional memory is needed. At the end of this, you've stored 1024 * 5 bytes = ~5KB, but Memcache has used 1MB to store it. Store a few million of these, and you can imagine consuming gigabytes of memory to store kilobytes of data.
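A quick back-of-the-envelope check of those numbers (this is the hypothetical 1K-slot configuration from above, not Memcache's real default slab sizes):

```python
# Hypothetical slab configuration from the example above.
slot_size = 1024          # smallest slab class: 1KB slots
slots_per_page = 1024     # allocated 1MB at a time
item_size = 5             # 1-byte value + 4-byte key

payload = slots_per_page * item_size            # 5,120 bytes actually stored
allocated = slot_size * slots_per_page          # 1,048,576 bytes allocated

print(f"stored {payload} bytes in {allocated} bytes of slab memory")
print(f"overhead factor: {allocated / payload:.0f}x")  # ~205x
```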

This is close to a worst case. In practice, Memcache can be configured with a quite small minimum slab class size if needed, and the growth factor (the size difference between slab classes) can be widened or narrowed. If you're caching database queries, you might have items sized from a few bytes to several KB; with page content you could even get into the MB range.

Here's the key point: Memcache won't reclaim memory or clean up slabs (newer versions can do this now, at a pretty significant performance cost, but traditionally this is how Memcache has worked).

Suppose you have a system that has been happily running and caching for a few days. You have hundreds of slabs of various sizes. You deploy a new page-caching strategy to your app without resetting the cache: instead of caching whole pages, you're now caching parts of the page. You've changed your caching pattern from storing lots of ~1MB objects to storing lots of ~10KB objects. Here's where we get into trouble.

Memcache has allocated a bunch of slabs that hold objects of about 1MB. You never used to cache many 10KB objects before. The slabs that have 10KB slots fill up quickly, but now you have a whole bunch of allocated slabs holding 1MB objects which aren't being used (nothing else is that big). Memcache won't put your 10KB objects in a 1MB slot (even if it did, it wouldn't help for very long). It needs more slabs that hold 10KB objects, but it can't get them because all your memory has already been allocated to the slabs that hold 1MB objects. The result is that you're left with potentially gigabytes of memory allocated in slabs for 1MB objects that sit idle, while your 10KB-slot slabs are full. In this scenario, you will start evicting items from the 10KB-slot slabs despite having gigabytes sitting idle.

This was a long-winded, contrived, and extreme example. Rarely does your caching strategy change so obviously or so dramatically. The default growth factor for slab classes is 1.25, so you'd have slabs with 1KB slots, 1.25KB slots, ~1.56KB slots, and so on. The concept holds -- if you heavily use slabs of a certain size and that pattern shifts (SQL queries return more objects? web pages get bigger? a column added to a table moves a cached response up a slab class? etc.), then you can end up with a bunch of slabs that are the "wrong" size, and you can have "nowhere" to store something despite having gigabytes of "unused" space.
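To make the growth-factor idea concrete, here is a small sketch that generates slab-class slot sizes the way this answer describes (the 1KB starting size is the hypothetical value from the example above, not memcached's actual default, and the class-selection logic is a simplification of the real allocator):

```python
GROWTH_FACTOR = 1.25   # memcached's default growth factor
BASE_SLOT = 1024       # hypothetical smallest slot size from the example

def slab_classes(n=10):
    """Return the first n slot sizes: 1KB, 1.25KB, ~1.56KB, ..."""
    size = BASE_SLOT
    sizes = []
    for _ in range(n):
        sizes.append(int(size))
        size *= GROWTH_FACTOR
    return sizes

def slot_for(item_size, sizes):
    """Pick the smallest slab class whose slots fit the item."""
    for s in sizes:
        if item_size <= s:
            return s
    raise ValueError("item larger than the biggest slab class")

sizes = slab_classes()
print(sizes)                  # [1024, 1280, 1600, 2000, ...]
print(slot_for(1100, sizes))  # a 1,100-byte item lands in the 1280-byte class
```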

If you are getting evictions, you can telnet into Memcache and find out which slabs are causing them. Usually a cache reset (yes, empty out everything) fixes the issue. Here's a reference on how to get at the stats: http://lzone.de/articles/memcached.htm
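For example, here is a minimal sketch that speaks the memcached text protocol directly instead of using telnet, and prints the per-slab-class eviction counters (the host and port are assumptions; adjust for your setup):

```python
import socket

def memcached_stats(host="localhost", port=11211, command="stats items"):
    """Send a stats command over the memcached text protocol and
    return the raw response lines (terminated by END)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode() + b"\r\n")
        buf = b""
        while not buf.endswith(b"END\r\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    return buf.decode().splitlines()

# Print per-slab-class eviction counters, e.g. "STAT items:5:evicted 1234".
# A nonzero count on a class tells you which slab sizes are under pressure.
for line in memcached_stats():
    if ":evicted " in line:
        print(line)
```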

John Hinnegan answered Oct 27 '22 00:10


Memcached stores data in slabs of different chunk sizes. If all the chunks of a given size are already allocated, the least-recently-used (LRU) algorithm runs on that slab class and evicts data, even if other slab classes still have free space.

A wide distribution of data sizes can therefore be responsible for this problem. Running multiple instances of memcached and using them as a distributed system can reduce the issue.
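As a sketch of that distributed setup (using pymemcache's HashClient as an assumed client library; the host names are placeholders):

```python
from pymemcache.client.hash import HashClient

# Keys are spread across the pool by consistent hashing, so each
# instance holds less data and slab pressure is spread out.
client = HashClient([
    ("cache1.example.com", 11211),  # placeholder hosts
    ("cache2.example.com", 11211),
])

client.set("greeting", "hello", expire=900)  # 15-minute TTL
print(client.get("greeting"))
```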

apatel answered Oct 27 '22 01:10