
Redis cache vs using memory directly


Redis is a remote data structure server. It is certainly slower than just storing the data in local memory (since it involves socket roundtrips to fetch/store the data). However, it also brings some interesting properties:

  • Redis can be accessed by all the processes of your applications, possibly running on several nodes (something local memory cannot achieve).

  • Redis memory storage is quite efficient, and it is handled in a separate process. If the application runs on a platform whose memory is garbage collected (Node.js, Java, etc.), it allows handling a much bigger memory cache/store. In practice, very large heaps do not perform well with garbage-collected languages.

  • Redis can persist the data on disk if needed.

  • Redis is a bit more than a simple cache: it provides various data structures, various item eviction policies, blocking queues, pub/sub, atomicity, Lua scripting, etc. A sketch of one such use (a shared blocking queue) follows this list.

  • Redis can replicate its activity with a master/slave mechanism in order to implement high availability.
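
For a concrete feel of that "more than a cache" point, here is a minimal sketch, assuming the redis-py client and a Redis server on localhost:6379: a blocking work queue that several producers and consumers can share, something an in-process cache cannot offer.

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Producer: push a job onto a list used as a queue.
    r.lpush("jobs", json.dumps({"task": "resize", "image_id": 42}))

    # Consumer (possibly another process or another node): wait up to 5 seconds for a job.
    item = r.brpop("jobs", timeout=5)
    if item is not None:
        _queue, payload = item
        print("processing", json.loads(payload))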

Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required.
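
As a sketch of that shared-cache pattern (assuming redis-py; load_user_from_db is a placeholder for whatever slow lookup you are caching), every process on every node reads and writes the same entry:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def load_user_from_db(user_id):
        # Placeholder for a real database query.
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)           # hit: served from the shared cache
        user = load_user_from_db(user_id)       # miss: do the expensive work once
        r.set(key, json.dumps(user), ex=300)    # cache for 5 minutes, visible to all nodes
        return user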


Serverless architectures, where each request may be handled by a different container, are increasingly popular, and Redis can play a very important role there.

We can't rely on a simple in-process cache in a serverless setup, because we can't be sure a request will be served by the same container that holds the cached data.

In that case we have to use something like Redis: it stores the cache at a remote location, so the data remains accessible even when the container serving the request changes.
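
A minimal sketch of that difference, assuming redis-py and a generic handler(event) entry point (the host name and event shape are placeholders, not any particular provider's API): the module-level dictionary survives only inside one container, while the Redis entry is visible to whichever container handles the next request.

    import json
    import redis

    # An in-process dict like this is lost whenever a different container serves the
    # request, so it can only ever be a best-effort, per-container optimisation.
    local_cache = {}

    r = redis.Redis(host="redis.internal", port=6379, decode_responses=True)  # placeholder address

    def handler(event):
        key = f"session:{event['session_id']}"

        if key in local_cache:          # only helps if this same container is hit again
            return local_cache[key]

        cached = r.get(key)             # works no matter which container runs this
        if cached is not None:
            local_cache[key] = json.loads(cached)
            return local_cache[key]

        data = {"session_id": event["session_id"], "fresh": True}  # placeholder payload
        r.set(key, json.dumps(data), ex=600)
        local_cache[key] = data
        return data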