Mongodb terminates when it runs out of memory

I have the following configuration:

  • a host machine that runs three docker containers:
    • Mongodb
    • Redis
    • A program using the previous two containers to store data

Both Redis and Mongodb are used to store huge amounts of data. I know Redis needs to keep all its data in RAM and I am fine with this. Unfortunately, what happens is that Mongo starts taking up a lot of RAM and as soon as the host RAM is full (we're talking about 32GB here), either Mongo or Redis crashes.

I have read the following previous questions about this:

  1. Limit MongoDB RAM Usage: apparently most RAM is used up by the WiredTiger cache
  2. MongoDB limit memory: here apparently the problem was log data
  3. Limit the RAM memory usage in MongoDB: here they suggest limiting mongo's memory so that it uses a smaller amount of memory for its cache/logs/data
  4. MongoDB using too much memory: here they say it's WiredTiger caching system which tends to use as much RAM as possible to provide faster access. They also state it's completely okay to limit the WiredTiger cache size, since it handles I/O operations pretty efficiently
  5. Is there any option to limit mongodb memory usage?: caching again; they also add that MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release, you will find some more information in these two questions
  6. MongoDB index/RAM relationship: quote: MongoDB keeps what it can of the indexes in RAM. They'll be swapped out on an LRU basis. You'll often see documentation that suggests you should keep your "working set" in memory: if the portions of index you're actually accessing fit in memory, you'll be fine.
  7. how to release the caching which is used by Mongodb?: same answer as in 5.

Now what I appear to understand from all these answers is that:

  1. For faster access it would be better for Mongo to fit all indices in RAM. However, in my case, I am fine with indices partially residing on disk as I have quite a fast SSD.
  2. RAM is mostly used for caching by Mongo.

Considering this, I was expecting Mongo to try to use as much RAM as possible, while still being able to function with little RAM and fetch most things from disk. However, when I limited the Mongo Docker container's memory (to 8GB, for instance) using --memory and --memory-swap, instead of fetching stuff from disk, Mongo just crashed as soon as it ran out of memory.
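For reference, the container limit described above can be set when starting the Mongo container. A sketch (image name and limit values are examples, not from the original setup):

```shell
# Hard-cap the container at 8 GB of RAM; setting --memory-swap equal to
# --memory means the container gets no additional swap beyond that limit.
docker run -d --name mongodb \
  --memory=8g --memory-swap=8g \
  mongo
```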

How can I force Mongo to use only the available memory and to fetch from disk everything that does not fit into memory?

asked Mar 07 '23 by Simone Bronzini

1 Answer

Thanks to @AlexBlex's comment I solved my issue. Apparently the problem was that Docker limited the container's RAM to 8GB, but the WiredTiger storage engine was still trying to use 50% of the total system RAM minus 1GB for its cache (which in my case would have been about 15GB).
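The sizing above follows the default documented by MongoDB: WiredTiger's cache defaults to the larger of 50% of (total RAM − 1 GB) and 256 MB, computed from the host's RAM, not the container limit. A quick sketch of that formula:

```python
def default_wiredtiger_cache_gb(total_ram_gb):
    """Default WiredTiger internal cache size in GB:
    the larger of 50% of (total RAM - 1 GB) and 256 MB (0.256 GB here,
    using decimal GB for simplicity)."""
    return max(0.5 * (total_ram_gb - 1), 0.256)

# On a 32 GB host, WiredTiger tries to claim ~15.5 GB for its cache,
# far more than an 8 GB container limit allows.
print(default_wiredtiger_cache_gb(32))   # 15.5
print(default_wiredtiger_cache_gb(8))    # 3.5
```

This is why the container crashed: the engine sized its cache from the host's 32GB, ignoring the 8GB Docker cap.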

Capping WiredTiger's cache size (via the storage.wiredTiger.engineConfig.cacheSizeGB configuration option, or --wiredTigerCacheSizeGB on the command line) to a value less than what Docker was allocating solved the problem.
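A minimal mongod.conf fragment showing the cap (the 4GB value is an example; the idea is to leave headroom below the 8GB container limit for connections, aggregations, and other per-process memory):

```yaml
# mongod.conf -- cap WiredTiger's internal cache below the Docker memory limit
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4
```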

answered Mar 10 '23 by Simone Bronzini