 

Choosing redis maxmemory size and BGSAVE memory usage

Tags: memory, redis

I am trying to find out what a safe setting for 'maxmemory' would be in the following situation:

  • write-heavy application
  • 8GB RAM
  • let's assume other processes take up about 1GB
  • this means that the redis process' memory usage may never exceed 7GB
  • memory usage doubles on every BGSAVE event, because:

In the redis docs the following is said about the memory usage increasing on BGSAVE events:

If you are using Redis in a very write-heavy application, while saving an RDB file on disk or rewriting the AOF log Redis may use up to 2 times the memory normally used.

  • the maxmemory limit is enforced against roughly the 'used_memory' value from redis-cli INFO (as is explained here) and does not account for other memory used by redis

Am I correct that this means that the maxmemory setting should, in this situation, be set no higher than (8GB - 1GB) / 2 = 3.5GB?
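To make the headroom arithmetic explicit, here is a minimal Python sketch; the 8GB/1GB figures and the 2x BGSAVE factor are the assumptions stated above, not values read from Redis:

```python
# Rough headroom calculation for maxmemory, assuming a worst-case 2x
# memory spike while an RDB save / AOF rewrite is in progress
# (per the redis docs quote above).
total_ram_gb = 8.0        # physical RAM on the box
other_processes_gb = 1.0  # assumed footprint of everything that isn't redis
bgsave_factor = 2.0       # worst-case multiplier during BGSAVE

available_for_redis_gb = total_ram_gb - other_processes_gb
safe_maxmemory_gb = available_for_redis_gb / bgsave_factor

print(f"available for redis: {available_for_redis_gb:.1f} GB")
print(f"suggested maxmemory ceiling: {safe_maxmemory_gb:.2f} GB")  # 3.50 GB
```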

If so, I will create a pull request for the redis docs to reflect this more clearly.

asked Feb 19 '13 by Matthijs van den Bos

1 Answer

I would recommend a limit of 3GB in this case. Yes, the docs are essentially correct: running a BGSAVE can briefly double the memory requirements. However, I prefer to reserve 2GB of memory for the system, or, on a persisting master, to cap Redis at no more than 40% of total memory.
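A small sketch of how that headroom could be checked at runtime, assuming the redis-py client and a local instance; used_memory and maxmemory come from INFO and CONFIG GET, while the 40% threshold is this answer's rule of thumb, not a Redis default:

```python
import redis

# Assumes a local Redis reachable via redis-py; field names come from
# the standard INFO output (used_memory) and CONFIG GET maxmemory.
r = redis.Redis(host="localhost", port=6379)

total_ram_bytes = 8 * 1024 ** 3                       # the 8GB box from the question
maxmemory = int(r.config_get("maxmemory")["maxmemory"])
used = r.info("memory")["used_memory"]

# Rule of thumb from this answer: on a persisting master, keep maxmemory
# at or below ~40% of total RAM so a BGSAVE fork can't exhaust the box.
ceiling = 0.4 * total_ram_bytes
print(f"maxmemory={maxmemory}, used_memory={used}, suggested ceiling={int(ceiling)}")
if maxmemory == 0 or maxmemory > ceiling:
    print("maxmemory is unset or above the 40% rule of thumb")
```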

You indicate you have a very write-heavy application. In that case I would strongly recommend having a second server handle the save operations. I've found that during heavy writes combined with a BGSAVE, the response time to clients can climb; it isn't Redis per se causing it but the responsiveness of the server itself, and this is especially true for virtual machines. Under this setup the second server replicates from the primary and saves to disk, while the primary stays responsive to clients.
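A minimal sketch of that split, again assuming redis-py and hypothetical hostnames (primary-host / replica-host); it disables RDB snapshots on the primary and lets the replica take on the disk persistence:

```python
import redis

# Hypothetical hosts; the point is only the division of labour described
# above: the primary serves writes, the replica persists to disk.
primary = redis.Redis(host="primary-host", port=6379)
replica = redis.Redis(host="replica-host", port=6379)

# Primary: keep serving clients, skip RDB snapshotting entirely.
primary.config_set("save", "")

# Replica: follow the primary and keep normal snapshot rules, so the
# BGSAVE forks (and their memory spike) happen here instead.
replica.slaveof("primary-host", 6379)
replica.config_set("save", "900 1 300 10 60 10000")
```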

answered Oct 07 '22 by The Real Bill