I tried to import a dump of a large database into my local MongoDB instance.
Unfortunately, for one of the imported collections, mongod threw an exception saying that too many files were open.
I searched the internet and found some solutions involving ulimit and launchctl, but they didn't work.
I finally resolved the problem in the following way:
limit maxproc 512 1024
limit maxfiles 16384 32768
sudo sysctl -w kern.maxfilesperproc=16384
sudo sysctl -w kern.maxfiles=32768
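To check whether the new values actually took effect (macOS-specific commands; the key names come from the lines above), you can query them back:

```shell
# Read back the kernel-wide limits set with sysctl -w (macOS)
sysctl kern.maxfiles kern.maxfilesperproc
# Read back the launchd per-process limits (soft and hard values)
launchctl limit maxfiles
launchctl limit maxproc
```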
The issue doesn't occur anymore, but I have a question: is there a way to limit the number of open files at the mongorestore level? I don't think increasing the global limit for open files is a good approach.
Indeed, --numParallelCollections=1 fixed this for me on OS X without modifying any system settings. I was able to do a full database restore that previously didn't complete.
However, it seems to max out the connection pool, as I still get
2017-06-01T16:55:19.386+0800 E NETWORK [initandlisten] Out of file descriptors. Waiting one second before trying to accept more connections.
when trying to continue, and a restart of mongod is required.
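For reference, the flag is passed directly on the mongorestore command line (the dump path here is illustrative):

```shell
# Restore collections one at a time instead of in parallel, which keeps
# the number of simultaneously open files and connections low.
mongorestore --numParallelCollections=1 /path/to/dump
```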
It's much safer to change the limit only for the current shell session. Adding it to your bash profile is also an option if you need it in every bash session.
ulimit -S -n 2048
your_whatever_greedy_command...
You may replace 2048 with another value. If your greedy command is starting mongod, it would be:
ulimit -S -n 2048 && mongod
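If you want to be even more careful, run the command in a subshell: the changed soft limit then applies only to that subshell and its children, and the parent shell's limit is untouched. A small demonstration (using a harmless echo in place of the greedy command, and lowering the limit only to make the scoping visible):

```shell
# Record the parent shell's current soft limit on open files
parent_limit=$(ulimit -S -n)

# Change the soft limit inside a subshell; only the subshell sees it.
# Replace the echo with your greedy command, e.g. mongod.
( ulimit -S -n 256; echo "subshell limit: $(ulimit -S -n)" )

# Back in the parent shell, the soft limit is unchanged
echo "parent limit still: $(ulimit -S -n)"
```

Note that a non-root process can freely lower its soft limit, but can only raise it up to the hard limit (`ulimit -H -n`).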