Too many open files while running ensureIndex in MongoDB

I would like to create a text index on a MongoDB collection. I run:

db.test1.ensureIndex({'text':'text'})

and then I see this in the mongod log:

Sun Jan  5 10:08:47.289 [conn1] build index library.test1 { _fts: "text", _ftsx: 1 }
Sun Jan  5 10:09:00.220 [conn1]         Index: (1/3) External Sort Progress: 200/980    20%
Sun Jan  5 10:09:13.603 [conn1]         Index: (1/3) External Sort Progress: 400/980    40%
Sun Jan  5 10:09:26.745 [conn1]         Index: (1/3) External Sort Progress: 600/980    61%
Sun Jan  5 10:09:37.809 [conn1]         Index: (1/3) External Sort Progress: 800/980    81%
Sun Jan  5 10:09:49.344 [conn1]      external sort used : 5547 files  in 62 secs
Sun Jan  5 10:09:49.346 [conn1] Assertion: 16392:FileIterator can't open file: data/_tmp/esort.1388912927.0//file.233errno:24 Too many open files

I am working on Mac OS X 10.9.1. Please help.

Piotr Sobolewski asked Jan 05 '14


3 Answers

I added a temporary ulimit -n 4096 before the restore command. You can also use mongorestore --numParallelCollections=1 ..., which seems to help. But the connection pool still seems to get exhausted.
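A minimal sketch of that sequence, assuming a dump directory named dump/ (both the path and the 4096 value are illustrative):

ulimit -n 4096                                  # raise the open-file limit for this shell session only
mongorestore --numParallelCollections=1 dump/   # restore one collection at a time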

dcsan answered Nov 16 '22


NB: this solution does not (or may not) work with recent versions of macOS (comments indicate 10.13 and later?). Apparently, changes have been made for security purposes.

Conceptually, the solution still applies; the following are a few sources of discussion (a sketch of the newer approach follows the links):

  • https://wilsonmar.github.io/maximum-limits/
  • https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c
  • https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
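On those newer systems, the links above converge on installing a launchd daemon rather than editing /etc/launchd.conf. A minimal sketch, assuming a limit.maxfiles.plist written as described in the gist linked above (the filename and values follow that gist; adjust as needed):

sudo cp limit.maxfiles.plist /Library/LaunchDaemons/
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
launchctl limit maxfiles    # verify the new soft/hard limits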

--

I've had the same problem (executing a different operation, but still a "Too many open files" error), and as lese says, it seems to come down to the 'maxfiles' limit on the machine running mongod.

On a Mac, it is better to check the limits with:

sudo launchctl limit

This gives you:

<limit name> <soft limit> <hard limit>
    cpu         unlimited      unlimited      
    filesize    unlimited      unlimited      
    data        unlimited      unlimited      
    stack       8388608        67104768       
    core        0              unlimited      
    rss         unlimited      unlimited      
    memlock     unlimited      unlimited      
    maxproc     709            1064           
    maxfiles    1024           2048  

What I did to get around the problem was to temporarily raise the limit (mine was originally something like soft 256 / hard 1000):

sudo launchctl limit maxfiles 1024 2048

Then re-run the query/indexing operation and see if it still fails. If it doesn't, and you want to keep the higher limits (they reset when you log out of the shell session in which you set them), create an /etc/launchd.conf file with the following line:

limit maxfiles 1024 2048

(or add that line to your existing launchd.conf file, if you already have one).

This will set the maxfiles limit via launchctl in every shell at login.
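After logging back in, the same command as above confirms the persisted limits:

launchctl limit maxfiles    # should now report the values from launchd.conf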

dpb answered Nov 16 '22


It may be related to your system's resource limits.

Try checking your system configuration by issuing the following command in a terminal:

ulimit -a
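The figure of interest in that output is "open files", the per-process file-descriptor limit. To inspect or raise just that value for the current shell session (4096 is an arbitrary example):

ulimit -n        # show the current open-file limit
ulimit -n 4096   # raise it for this shell only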

lese answered Nov 16 '22