I am running code which fails, sometimes after hours and sometimes after minutes, with the error
OSError: [Errno 24] Too many open files
and I have real trouble debugging this. The error itself is always triggered by the marked line in the code snippet below:
try:
    with open(filename, 'rb') as f:
        contents = f.read()  # <----- error triggered here
except OSError as e:
    print("e = ", e)
    raise
else:
    ...  # other stuff happens
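For what it's worth, I verified my understanding that a with block closes the file even when the body raises (the temporary file here is just a throwaway example):

```python
import os
import tempfile

# Create a throwaway file to test with
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    filename = tmp.name

try:
    with open(filename, 'rb') as f:
        raise RuntimeError("simulated error during read")
except RuntimeError:
    pass

print(f.closed)  # True -- the file is closed despite the exception
os.unlink(filename)
```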
However, I can't see any problem in this part of the code (right?), so I guess that other parts of the code don't close files properly. However, while I do open files quite a bit, I always open them with a with statement, and my understanding is that the files will be closed even if an error occurs (right?). So another part of my code looks like this:
try:
    with tarfile.open(filename + '.tar') as tar:
        tar.extractall(path=target_folder)
except tarfile.ReadError as e:
    print("e = ", e)
except OSError as e:
    print("e = ", e)
else:
    # If everything worked, we are done
    return
The code above does run into a ReadError quite frequently, but even when that happens, the file should be closed, right? So I just don't understand how I can run into too many open files. Sorry, this is not reproducible for you, since I can't narrow it down enough; I am just fishing for some tips here, since I am lost. Any help is appreciated.
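One way to debug this kind of leak is to count the process's open file descriptors at interesting points and watch whether the number keeps growing. A sketch (this relies on /dev/fd, which exists on macOS and Linux):

```python
import os

def open_fd_count():
    """Number of file descriptors currently open in this process."""
    return len(os.listdir('/dev/fd'))

# Each open file should raise the count by exactly one,
# and closing it should bring the count back down.
before = open_fd_count()
f = open('/dev/null')
after = open_fd_count()
print(before, after)  # after == before + 1
f.close()
```

Sprinkling calls to a helper like this around the extraction loop shows quickly whether descriptors accumulate over time.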
Edit: I am on a MacBook. Here is the output of ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1418
virtual memory (kbytes, -v) unlimited
Following the suggestion by @sj95126, I changed the code concerning the tar file to something that ensures the file is closed:
try:
    tar = tarfile.open(filename + '.tar')
    tar.extractall(path=target_folder)
except tarfile.ReadError as e:
    print("tarfile.ReadError e = ", e)
except OSError as e:
    print("e = ", e)
else:
    # If everything worked, we are done
    return
finally:
    print("close tar file")
    try:
        tar.close()
    except Exception:
        print("file already closed")
but it did not solve the problem.
On Unix/Linux systems (macOS included) you can check the limit on open files with ulimit -a. In @carl's situation the output was:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1418
virtual memory (kbytes, -v) unlimited
As you can see, the open files limit is 256:
open files (-n) 256
which is a very small value.
@carl's archive contained more than 256 files, and every file Python opens consumes a file descriptor (the operating system's handle to an open file). Once the process holds as many descriptors as the limit allows, the next open fails with Errno 24.
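The error is easy to reproduce directly by lowering the soft limit and opening files in a loop (a sketch; the limit value 64 is an arbitrary choice for the demo):

```python
import resource

# Lower the soft limit so the demo hits it quickly (hard limit unchanged)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

files = []
last_error = None
try:
    while True:
        files.append(open('/dev/null'))
except OSError as e:
    last_error = e
    print("e = ", e)  # [Errno 24] Too many open files
finally:
    for f in files:
        f.close()
    # Restore the original soft limit
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```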
The solution is to raise the open files value to a much larger number (or to unlimited).
According to this stack answer, this is how you can change the limit:
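The limit can also be raised from inside the Python process itself via the resource module. A sketch, where 4096 is an arbitrary target value (an unprivileged process can raise its soft limit only up to the hard limit):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("before:", soft, hard)

# Raise the soft limit toward the hard limit; 4096 is an arbitrary
# target -- going above the hard limit requires root privileges.
if hard == resource.RLIM_INFINITY:
    target = 4096
else:
    target = min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("after:", resource.getrlimit(resource.RLIMIT_NOFILE))
```

From a shell, the equivalent for the current session is ulimit -n 4096.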