A common server socket pattern on Linux/UNIX systems is to listen on a socket, accept a connection, and then fork() to process the connection.

So, it seems that after you accept() and fork(), once you're inside the child process, you will have inherited the listening file descriptor of the parent process. I've read that at this point, you need to close the listening socket file descriptor from within the child process.
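For concreteness, a minimal sketch of that pattern might look like the following (IPv4 stream socket assumed; the port number and backlog are arbitrary, and most error handling is omitted):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);            /* illustrative port */

    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);                  /* arbitrary backlog */

    signal(SIGCHLD, SIG_IGN);                /* let the kernel reap children */

    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            continue;

        pid_t pid = fork();
        if (pid == 0) {                      /* child: handle this connection */
            close(listen_fd);                /* <-- the close this question is about */
            /* ... read/write on conn_fd ... */
            close(conn_fd);
            _exit(0);
        }
        close(conn_fd);                      /* parent: keep only the listener */
    }
}
```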
My question is, why? Is this simply to reduce the reference count of the listening socket? Or is it so that the child process itself will not be used by the OS as a candidate for routing incoming connections? If it's the latter, I'm a bit confused for two reasons:
(A) What tells the OS that a certain process is a candidate for accepting connections on a certain file descriptor? Is it the fact that the process has called accept()? Or is it the fact that the process has called listen()?
(B) If it's the fact that the process has called listen(), don't we have a race condition here? What if this happens: the parent is listening on socket S, accepts a connection, and forks a child; before the child process calls close(S), a second incoming connection goes to the Child Process; the Child Process never calls accept() (because it's not supposed to), so the incoming connection gets dropped.

What prevents the above condition from happening? And more generally, why should a child process close the listening socket?
If a process terminates, all its handles are closed implicitly. Therefore, if a child closes the handle it inherited for the listening socket, the only handle remaining to that socket exists in the parent. The listening socket will then be reclaimed when the parent terminates or closes this handle explicitly.
If socket refers to an open TCP connection, the connection is closed. If a stream socket is closed when there is input data queued, the TCP connection is reset rather than being cleanly closed. Note: all sockets should be closed before the end of your process.
The listen() function applies only to stream sockets. It indicates a readiness to accept client connection requests, and creates a connection-request queue of length backlog to hold incoming connection requests. Once the queue is full, additional connection requests are rejected.
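As a small illustration of the backlog parameter (the value 64 here is arbitrary; on Linux the kernel may silently cap it at the net.core.somaxconn sysctl):

```c
/* Ask the kernel to queue up to 64 pending connection requests.
 * The value is an arbitrary example; Linux caps it at net.core.somaxconn. */
if (listen(listen_fd, 64) < 0)
    perror("listen");
```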
No and no. The socket isn't put into listening mode until you call listen() . It must be listening in order to accept() . And, once you're listening, you cannot convert the socket to a connected socket.
Linux queues up pending connections. A call to accept(), from either the parent or child process, will take the next connection from that queue.

Not closing the socket in the child process is a resource leak, but not much else. The parent will still grab all the incoming connections, because it's the only one that calls accept(); but if the parent exits, the socket will still exist, because it's open in the child, even if the child never uses it.
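To make that last point concrete, here is a small sketch (illustrative port, arbitrary sleep, error handling omitted) in which the parent exits but the port stays bound because the child never closed its inherited copy of the listening descriptor:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);            /* illustrative port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    if (fork() == 0) {
        /* Child: never uses listen_fd, but never closes it either. */
        sleep(60);                           /* socket stays open all this time */
        _exit(0);
    }

    /* Parent exits immediately. Its copy of listen_fd is closed implicitly,
     * but the socket itself is not released until the child's copy goes away. */
    return 0;
}
```

While the child sleeps, the port is still in the LISTEN state, so a restarted parent would fail to bind() to it with EADDRINUSE.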
The incoming connection will be 'delivered' to whichever process is calling accept(). After the fork, and before the file descriptor is closed, the connection could be accepted in either process.

So as long as you never accept any connections in the child process, and the parent keeps accepting them, everything will work fine.

But if you never plan to accept connections in your child process, why would you want to keep the resources for the socket around in that process?
The interesting question is what happens if both processes call accept() on the socket. I could not find definitive information on this at the moment; what I could find is that you can be sure every connection is delivered to only one of these processes.
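For what it's worth, that "both processes accept()" case is essentially a pre-forking server. A minimal sketch of it (illustrative port, error handling omitted) might look like this:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);            /* illustrative port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    pid_t pid = fork();                      /* parent and child now share listen_fd */

    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            continue;
        /* Each incoming connection is returned by accept() in exactly one process. */
        printf("connection handled by %s (pid %d)\n",
               pid == 0 ? "child" : "parent", (int)getpid());
        close(conn_fd);
    }
}
```

Which of the two blocked accept() calls gets a given connection is up to the kernel; the guarantee is only that each connection is handed to one of them.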