I have the following configuration: both Redis and MongoDB run on the same host and are used to store large amounts of data. I know Redis needs to keep all of its data in RAM, and I am fine with this. Unfortunately, Mongo also starts taking up a lot of RAM, and as soon as the host's RAM is full (we're talking about 32GB here), either Mongo or Redis crashes.
I have read the following previous questions about this:
it's completely okay to limit the WiredTiger cache size, since it handles I/O operations pretty efficiently
MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release; you will find some more information in these two questions
MongoDB keeps what it can of the indexes in RAM. They'll be swapped out on an LRU basis. You'll often see documentation that suggests you should keep your "working set" in memory: if the portions of index you're actually accessing fit in memory, you'll be fine.
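All of these answers refer to the WiredTiger cache, whose configured ceiling and current usage can be inspected at runtime. A minimal sketch, assuming a container named mongo running an image that ships mongosh (older images ship the mongo shell instead):

    # Print WiredTiger's configured cache ceiling and current usage.
    docker exec mongo mongosh --quiet --eval '
      const cache = db.serverStatus().wiredTiger.cache;
      print("max bytes configured :", cache["maximum bytes configured"]);
      print("bytes currently used :", cache["bytes currently in the cache"]);
    '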
Now, what I understand from all of these answers is that Mongo caches aggressively but releases pages on an LRU basis when memory gets scarce. Considering this, I expected Mongo to use as much RAM as possible while still being able to function with little RAM, fetching most things from disk. However, when I limited the Mongo Docker container's memory (to 8GB, for instance) using --memory and --memory-swap, Mongo crashed as soon as it ran out of memory instead of fetching data from disk.
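For reference, the limits I applied looked roughly like this (container name and image tag are placeholders, not necessarily what I used):

    # Hard-cap the container at 8GB; setting --memory-swap equal to
    # --memory means the container gets no swap on top of that limit.
    docker run -d --name mongo \
      --memory=8g --memory-swap=8g \
      mongo:4.4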
How can I force Mongo to use only the available memory and to fetch from disk everything that does not fit into memory?
Thanks to @AlexBlex's comment I solved my issue. Apparently the problem was that Docker limited the container's RAM to 8GB, but the WiredTiger storage engine was still sizing its cache from its default of 50% of (total system RAM - 1GB), which in my case would have been about 15GB on a 32GB host. Capping WiredTiger's cache size with the storage.wiredTiger.engineConfig.cacheSizeGB configuration option (or the equivalent --wiredTigerCacheSizeGB mongod flag) to a value less than what Docker was allocating solved the problem.
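For example, with the container capped at 8GB, starting mongod with an explicit cache limit might look like this (the 3.5GB value and image tag are illustrative; the cache should sit well below the container limit, since mongod also uses memory outside the WiredTiger cache):

    # Arguments after the image name are passed straight to mongod.
    docker run -d --name mongo \
      --memory=8g --memory-swap=8g \
      mongo:4.4 --wiredTigerCacheSizeGB 3.5

The same cap can be set in mongod.conf under storage.wiredTiger.engineConfig.cacheSizeGB.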