
alternative to memcached that can persist to disk

I am currently using memcached with my java app, and overall it's working great.

The features of memcached that are most important to me are:

  • it's fast, since reads and writes are in-memory and don't touch the disk
  • it's just a key/value store (since that's all my app needs)
  • it's distributed
  • it uses memory efficiently by having each object live on exactly one server
  • it doesn't assume that the objects are from a database (since my objects are not database objects)

However, there is one thing that I'd like to do that memcached can't do. I want to periodically (perhaps once per day) save the cache contents to disk. And I want to be able to restore the cache from the saved disk image.

The disk save does not need to be very complex. If a new key/value is added while the save is taking place, I don't care if it's included in the save or not. And if an existing key/value is modified while the save is taking place, the saved value should be either the old value or the new value, but I don't care which one.
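
To make that tolerance concrete, here is roughly what I mean in plain Java (a single-node sketch only, not the distributed setup I actually have; the class and method names are just made up for illustration):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical single-node cache that dumps its contents to disk once a day.
public class SnapshotCache {
    private final ConcurrentHashMap<String, Serializable> map = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void put(String key, Serializable value) { map.put(key, value); }
    public Serializable get(String key) { return map.get(key); }

    public void startDailySnapshots(String path) {
        scheduler.scheduleAtFixedRate(() -> {
            try (ObjectOutputStream out =
                         new ObjectOutputStream(new FileOutputStream(path))) {
                // Copying the map iterates it in a weakly consistent way:
                // entries added or modified during the copy may or may not
                // show up, which is exactly the relaxed behavior I can accept.
                out.writeObject(new ConcurrentHashMap<>(map));
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, 1, 1, TimeUnit.DAYS);
    }
}
```

That's the level of consistency I need, just across a distributed cache instead of a single JVM.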

Can anyone recommend another caching solution (either free or commercial) that has all (or a significant percentage) of the memcached features that are important to me, and also allows the ability to save and restore the entire cache from disk?

Asked Aug 22 '09 by Mike W


2 Answers

I have never tried it, but what about Redis?
Its homepage says (quoting):

Redis is a key-value database. It is similar to memcached but the dataset is not volatile, and values can be strings, exactly like in memcached, but also lists and sets with atomic operations to push/pop elements.

In order to be very fast but at the same time persistent, the whole dataset is kept in memory, and from time to time and/or when a number of changes to the dataset have been performed, it is written asynchronously to disk. You may lose the last few queries, which is acceptable in many applications, but it is as fast as an in-memory DB (Redis supports non-blocking master-slave replication in order to solve this problem by redundancy).

It seems to address several of the points you mentioned, so it might be helpful in your case.

If you try it, I'm pretty interested in what you find out, btw ;-)
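
If it helps, here is a quick sketch of how that could look from a Java app using the Jedis client (I have not actually tried this; the host, port and key are placeholders):

```java
import redis.clients.jedis.Jedis;

public class RedisCacheExample {
    public static void main(String[] args) {
        // Assumes a Redis server running on localhost:6379 (placeholder values).
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Plain key/value usage, just like memcached.
            jedis.set("user-42-name", "Mike");
            System.out.println(jedis.get("user-42-name"));

            // Ask Redis to write an RDB snapshot to disk in the background.
            // Normally you would rely on the "save" rules in redis.conf
            // instead of calling this by hand.
            jedis.bgsave();
        }
    }
}
```

On restart, Redis reloads the last snapshot from disk by itself, which should cover the "restore the cache from the saved disk image" part.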


As a side note: if you need to write all this to disk, maybe a cache system is not really what you need... after all, if you are using memcached as a cache, you should be able to re-populate it on demand whenever necessary. Still, I admit, there might be some performance problems if your whole memcached cluster goes down at once...

So, maybe some software that is more oriented toward being a key/value store could help? Something like CouchDB, for instance?
It will probably not be as fast as memcached, though, since the data is stored on disk rather than in RAM...
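
Just to give an idea, CouchDB speaks plain HTTP, so key/value puts and gets from Java could look roughly like this (an untested sketch using the JDK 11+ HTTP client; it assumes a local CouchDB with a database named "cache" already created and no authentication, which are assumptions on my part):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CouchDbKeyValueExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Assumed local CouchDB with an existing "cache" database and no auth.
        String base = "http://localhost:5984/cache/";

        // Store a value: each key becomes a document.
        HttpRequest put = HttpRequest.newBuilder(URI.create(base + "user-42"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"value\":\"Mike\"}"))
                .build();
        client.send(put, HttpResponse.BodyHandlers.ofString());

        // Read it back; note that updating an existing document later
        // requires sending its current _rev, unlike a plain cache.
        HttpRequest get = HttpRequest.newBuilder(URI.create(base + "user-42"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```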

Answered by Pascal MARTIN


Maybe your problem is like mine: I have only a few machines for memcached, but with lots of memory. If even one of them fails or needs to be rebooted, it seriously affects the performance of the system. According to the original memcached philosophy I should add a lot more machines, each with less memory, but that's not cost-efficient and not exactly "green IT" ;)

For our solution, we built an interface layer for the cache system so that providers for the underlying cache systems can be nested, much like you can do with streams, and wrote a cache provider for memcached as well as our own very simple key-value-to-disk storage provider. Then we defined a weight for each cache item that represents how costly it is to rebuild if it cannot be retrieved from the cache. The nested disk cache is only used for items with a weight above a certain threshold, maybe around 10% of all items.

When storing an object in the cache, we don't lose time, as saving to one or both caches is queued for asynchronous execution anyway. So writing to the disk cache doesn't need to be fast. Same for reads: first we check memcached, and only if the object isn't there and it is a "costly" one do we check the disk cache (which is orders of magnitude slower than memcached, but still much better than recalculating 30 GB of data after a single machine goes down).

This way we get the best of both worlds, without replacing memcached with anything new.
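
In rough Java terms, the layering looks something like this (the class names, the weight threshold and the single-threaded writer are simplified stand-ins for our real implementation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative interface for the cache layer described above (names invented).
interface CacheProvider {
    Object get(String key);
    void put(String key, Object value, int weight);
}

// Nests a fast cache (memcached) over a slow but persistent one (disk).
class TieredCacheProvider implements CacheProvider {
    private static final int DISK_WEIGHT_THRESHOLD = 10; // "costly to rebuild"

    private final CacheProvider memcached;
    private final CacheProvider disk;
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    TieredCacheProvider(CacheProvider memcached, CacheProvider disk) {
        this.memcached = memcached;
        this.disk = disk;
    }

    @Override
    public void put(String key, Object value, int weight) {
        // Writes are queued, so the slow disk store never blocks the caller.
        writer.submit(() -> {
            memcached.put(key, value, weight);
            if (weight >= DISK_WEIGHT_THRESHOLD) {
                disk.put(key, value, weight);
            }
        });
    }

    @Override
    public Object get(String key) {
        // Fast path first; cheap items were never written to disk, so a
        // miss there simply comes back as null and gets recalculated.
        Object value = memcached.get(key);
        return (value != null) ? value : disk.get(key);
    }
}
```

This is only the skeleton, of course; the point is that the nesting stays invisible to the calling code, which only ever sees a CacheProvider.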

Answered by realMarkusSchmidt