After a close() syscall fails with EINTR or EIO, it is unspecified whether the file has been closed (http://pubs.opengroup.org/onlinepubs/9699919799/). In multi-threaded applications, retrying the close may close unrelated files opened by other threads. Not retrying the close may result in unusable open file descriptors piling up. A clean solution might involve invoking fstat() on the freshly closed file descriptor and a quite complex locking mechanism. Alternatively, serializing all open/close/accept/... invocations with a single mutex may be an option (see the sketch below).
These solutions do not take into account that library functions may open and close files on their own in an uncontrollable way, e.g., some implementations of std::thread::hardware_concurrency() open files in the /proc filesystem.
File Streams as in the [file.streams] C++ standard section are not an option.
Is there a simple and reliable mechanism to close files in the presence of multiple threads?
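To illustrate the serialization idea mentioned above, a rough sketch; the names (fd_mutex, guarded_open, guarded_close) are made up for this example, and it obviously cannot cover descriptors that libraries open internally:

    // Sketch: serialize every open()/close() behind one process-wide mutex.
    #include <cerrno>
    #include <fcntl.h>
    #include <mutex>
    #include <unistd.h>

    namespace {
    std::mutex fd_mutex;  // serializes every descriptor-creating/destroying call

    int guarded_open(const char* path, int flags) {
        std::lock_guard<std::mutex> lock(fd_mutex);
        return ::open(path, flags);
    }

    int guarded_close(int fd) {
        std::lock_guard<std::mutex> lock(fd_mutex);
        int rc = ::close(fd);
        if (rc == -1 && errno == EINTR) {
            // While fd_mutex is held, no other guarded_open() can reuse the
            // descriptor number, so retrying cannot close an unrelated file.
            // This only works if *all* opens in the process go through
            // guarded_open(), which library-internal code will not do.
            rc = ::close(fd);
            if (rc == -1 && errno == EBADF)
                rc = 0;  // the first close() had already released the descriptor
        }
        return rc;
    }
    }  // anonymous namespace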
edits:
Regular files: While most of the time no unusable open file descriptors will accumulate, two conditions might trigger the problem: 1. signals delivered at high frequency, e.g. by some malware, and 2. network file systems that lose their connection before caches are flushed.
Sockets: According to Stevens/Fenner/Rudoff, if the SO_LINGER socket option is set on a file descriptor referring to a connected socket and, during close(), the linger timer elapses before the FIN-ACK shutdown sequence completes, then close() fails; this is the normal, documented behavior. Linux does not show this behavior; FreeBSD does, and sets errno to EAGAIN. As I understand it, it is unspecified in this case whether the file descriptor is invalidated. C++ code to test the behavior: http://www.longhaulmail.de/misc/close.txt The output of that test code looks like a race condition in FreeBSD to me; if it's not, why not?
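For reference, roughly how SO_LINGER would be set before such a close(); this is only an illustration of the option discussed above, not the linked test program, and the one-second timeout is an arbitrary value:

    // Enable SO_LINGER so close() blocks until the shutdown sequence finishes
    // or the timer expires. On some systems (e.g. FreeBSD, as described above)
    // an expired timer makes close() fail, reportedly with errno == EAGAIN.
    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <sys/socket.h>
    #include <unistd.h>

    void close_with_linger(int sock_fd) {
        struct linger lg;
        lg.l_onoff = 1;   // enable lingering on close()
        lg.l_linger = 1;  // wait at most 1 second (arbitrary value for the sketch)
        if (setsockopt(sock_fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg) == -1) {
            std::perror("setsockopt(SO_LINGER)");
        }
        if (::close(sock_fd) == -1) {
            // Whether sock_fd is still open at this point is exactly the
            // unspecified case this question is about.
            std::fprintf(stderr, "close: %s\n", std::strerror(errno));
        }
    }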
One might consider blocking signals during calls to close().
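A rough sketch of that idea, assuming pthread_sigmask() is available; it avoids EINTR from caught signals in the calling thread, but does nothing about EIO or the SO_LINGER timeout case above:

    // Block all blockable signals in this thread around close(), so close()
    // cannot be interrupted by a caught signal and the EINTR case never arises.
    #include <signal.h>
    #include <unistd.h>

    int close_without_eintr(int fd) {
        sigset_t all, old;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, &old);        // block signals in this thread
        int rc = ::close(fd);
        pthread_sigmask(SIG_SETMASK, &old, nullptr);   // restore the previous mask
        return rc;
    }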
This issue has been fixed in POSIX for the next issue; unfortunately it's too big a change to have made it into the recent TC2. See the final accepted text for Austin Group Issue #529.
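If I read the accepted text correctly, it introduces a posix_close() function with a flag argument (and a POSIX_CLOSE_RESTART flag); with a zero flag the descriptor is guaranteed to be released by the time the call returns, even if an error is reported. A hedged sketch of how code could adopt it once C libraries ship it; the #ifdef feature test is my own assumption:

    #include <cerrno>
    #include <unistd.h>

    // Assumption: an implementation that provides posix_close() from the Issue
    // #529 resolution also defines POSIX_CLOSE_RESTART. With a zero flag the
    // descriptor is released even on error, so there is nothing to retry and
    // nothing left dangling.
    int close_checked(int fd) {
    #ifdef POSIX_CLOSE_RESTART
        return posix_close(fd, 0);
    #else
        return ::close(fd);  // fall back to the old, underspecified behavior
    #endif
    }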
There's no practical solution for this problem, as POSIX (as currently published) doesn't address it at all.
"Not retrying the close may result in unusable open file descriptors piling up."

As much as this sounds like a legitimate concern, I have never seen it happen due to failed close() calls.
"A clean solution might involve invoking fstat() on the freshly closed file descriptor and a quite complex locking mechanism."
Not really. When close() fails, the state of the file descriptor is unspecified, so you can't reliably use it in an fstat() call. The file descriptor might have been closed already; in that case you are passing an invalid file descriptor to fstat(). Or another thread might have reused it; in that case you are passing the wrong file descriptor to fstat(). Or the file descriptor might have been corrupted by the failed close() call.
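To make the reuse hazard concrete, a hypothetical sketch (file names and thread bodies are invented): once close() has been called, nothing prevents another thread's open() from being handed the same descriptor number, so fstat() on that number may describe a different file.

    // Hypothetical illustration of the descriptor-reuse race described above.
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <thread>
    #include <unistd.h>

    void demo() {
        int fd = ::open("/tmp/a", O_RDONLY);  // say this returns 3

        std::thread other([] {
            ::open("/tmp/b", O_RDONLY);       // may be handed 3 once it is free
        });                                   // (descriptor leaked for brevity)

        ::close(fd);                          // even on EINTR, 3 may now be free
        struct stat st;
        ::fstat(fd, &st);                     // may describe /tmp/b, not /tmp/a
        other.join();
    }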
When the process exits, all of its open descriptors are closed anyway, so this isn't much of a practical concern. One could argue that it would be a problem in a long-running process in which close() fails too often, but I haven't seen this happen in my experience, and POSIX doesn't provide any alternative either.
Basically, you can't do much about this except report that the problem occurred.
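In that spirit, a minimal report-only wrapper; the function name and the choice of stderr are mine:

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <unistd.h>

    // Report-only close: do not retry (retrying might close a descriptor that
    // another thread has meanwhile opened); just record that the problem occurred.
    void close_and_report(int fd, const char* what) {
        if (::close(fd) == -1) {
            std::fprintf(stderr, "close(%d) for %s failed: %s\n",
                         fd, what, std::strerror(errno));
        }
    }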