I have the following scenario.
I create a pipe.
I fork a child process.
The child closes the read end of the pipe explicitly, writes into the write end of the pipe, and exits without closing anything (exit should close all open file/pipe descriptors on behalf of the child, I presume).
The parent closes the write end of the pipe explicitly and reads from the read end of the pipe using fgets until fgets returns NULL, i.e. until it has read everything.
Now my question is: why does the parent need to close the read end of the pipe explicitly once it's done reading? Wouldn't it make sense for the system to delete the pipe altogether once all the data has been read from the read end?
I didn't close the read end explicitly in the parent, and sooner or later I get a "Too many file descriptors" error while opening more pipes. My assumption was that the system automatically deletes a pipe once its write end is closed and the data has been completely read from the read end, because you can't read from a pipe twice!
So, what's the rationale behind the system not deleting the pipe once all the data has been read and the write end closed?
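Here is roughly what I'm doing, as a minimal sketch (error handling and the real payload are simplified; the variable names are just illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == 0) {                  /* child */
        close(fd[0]);                /* child only writes, so close the read end */
        const char *msg = "hello\nworld\n";
        write(fd[1], msg, strlen(msg));
        _exit(EXIT_SUCCESS);         /* exit closes fd[1] on the child's behalf */
    }

    /* parent */
    close(fd[1]);                    /* parent only reads, so close the write end */
    FILE *in = fdopen(fd[0], "r");   /* wrap the read end so fgets can be used */
    char line[128];
    while (fgets(line, sizeof line, in) != NULL)
        printf("parent read: %s", line);

    fclose(in);                      /* the explicit close my question is about */
    wait(NULL);
    return 0;
}
```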
If the reading process doesn't close the write end of the pipe, then after the other process closes its write descriptor, the reader won't see end-of-file, even after it has read all data from the pipe.
Further, if you overfill a pipe and there is still a process with the read end open (even if that process is the one trying to write), then the write will hang, waiting for the reader to make space for the write to complete.
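To illustrate that second point without actually hanging, here is a sketch of my own (not from the answer above) that fills a pipe within a single process. The write end is put into non-blocking mode, so instead of blocking the way a normal write would, the program reports when the buffer is full:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    /* Non-blocking so the demo prints "pipe full" instead of hanging,
       which is what a blocking write would do in this situation. */
    fcntl(fd[1], F_SETFL, fcntl(fd[1], F_GETFL) | O_NONBLOCK);

    char buf[4096] = {0};
    long total = 0;
    for (;;) {
        ssize_t n = write(fd[1], buf, sizeof buf);
        if (n == -1) {
            if (errno == EAGAIN)     /* buffer is full; a blocking write would hang here */
                printf("pipe full after %ld bytes\n", total);
            else
                perror("write");
            break;
        }
        total += n;
    }

    close(fd[0]);
    close(fd[1]);
    return 0;
}
```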
If the parent process does not close the write end of the pipe, then the child will block forever in its call to read, waiting for more data. (A read on the pipe blocks as long as any file descriptor for the write end is still open.) The parent will then block forever in its call to wait.
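To make the end-of-file rule concrete, here is a small single-process sketch (my own illustration, not part of the answers above). The read end is made non-blocking so you can see that, after the data has been drained, read reports "no data yet" (EAGAIN) as long as a write descriptor is still open, and only reports end-of-file (returns 0) once the write end is closed:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }
    fcntl(fd[0], F_SETFL, fcntl(fd[0], F_GETFL) | O_NONBLOCK);  /* so the demo never blocks */

    write(fd[1], "data", 4);

    char buf[16];
    read(fd[0], buf, sizeof buf);        /* drains the 4 bytes */

    /* Pipe is empty but the write end is still open: not EOF yet. */
    ssize_t n = read(fd[0], buf, sizeof buf);
    printf("before closing write end: read() = %zd (errno is EAGAIN: %d)\n",
           n, errno == EAGAIN);

    close(fd[1]);                        /* now no write descriptor remains */

    n = read(fd[0], buf, sizeof buf);
    printf("after closing write end:  read() = %zd (0 means end-of-file)\n", n);

    close(fd[0]);
    return 0;
}
```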
pipe() creates an internal system buffer and two file descriptors, one for reading and one for writing. After the pipe call (and the fork), the parent and child should each close the descriptor for the direction they do not use. Leaving those descriptors open does not give you full-duplex communication; a pipe only carries data one way.
You're correct that the system will close the write end of the pipe once the child exits. However, there could be another descriptor for the write end of that pipe still open, if the child forks or passes a duplicate of the write end to another process.
It is still true that the system can tell when all the descriptors at one end of a pipe have been closed (either explicitly or because the owning process exited). But it still doesn't make sense for it to close the descriptors at the other end as well, as that would lead to confusion when the parent process later tries to use or close the descriptor on its end of the pipe.
From the point of view of the system, it may well have discarded the pipe's internal buffer once all the descriptors at one end have been closed, so you don't need to worry about inefficiency there. What matters more is that the user-space process gets a consistent experience, which means not closing a descriptor unless that is specifically requested.
File descriptors are not closed by the system until the process exits. This is true for pipes as well as for any other file descriptor.
There's a big difference between a pipe (or any other file) with no data in it and a closed file descriptor.
When a file descriptor is closed, the system can reuse its number for a new file descriptor. If you then read from the old number, you may get something else entirely. So after you've closed a file descriptor, you must no longer use it.
Now imagine that, once there's no more data, the system automatically closed the file descriptor. This would make the number available for reuse, and a subsequent unrelated open might get it. The reader, which doesn't yet know that there's no more data, would then read from what it thinks is the pipe but is actually another file.
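A quick way to see descriptor-number reuse in action (my own illustration, using /dev/null simply as a convenient file to open):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int a = open("/dev/null", O_RDONLY);
    printf("first open:  fd %d\n", a);

    close(a);                               /* the number is now free for reuse */

    int b = open("/dev/null", O_RDONLY);    /* typically gets the same number back */
    printf("second open: fd %d\n", b);

    /* Any code still holding the old value 'a' would now be operating on
       this new, unrelated file descriptor. */
    close(b);
    return 0;
}
```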