I'm programming a server and a client using non-blocking sockets (fd_sets and the select() function). Once the server closes or shuts down a client socket, the client starts receiving a lot of garbage until it crashes.
I've been warned that when working with select() a socket becomes readable when the connection is terminated, but inside

if( FD_ISSET( socket, &read ) )
{
}

how can I tell whether the cause is regular data or the connection having ended?
Thanks a lot!
If only one peer closes the socket, the FIN it sends just communicates that it will send no more data. close(sock) additionally tells the local OS that the process is no longer willing to receive any data; this is where close(sock) differs from shutdown(sock, SHUT_WR).
One way or another, if you don't close a socket, your program will leak a file descriptor. Programs can usually only open a limited number of file descriptors, so if this happens a lot, it may turn into a problem.
Note: All sockets should be closed before the end of your process.
The file descriptor sets won't tell you whether the socket is closed, only that you may attempt to read from it without blocking. When the remote end closes the connection, the socket becomes "readable". When you then call recv(), the return value will be 0, indicating the connection was closed by the peer. Always check your return values.