 

TCP send() does not return; it crashes the process

If a TCP server and client are connected, I'd like to determine when the client is no longer connected. I thought I could do this simply by attempting to send a message to the client: once send() returns -1, I can tear down the socket. This implementation works fine on Windows, but the minute I try it on Linux with BSD sockets, the call to send() on the server side causes my server app to crash if the client is no longer connected. It doesn't even return -1... it just terminates the program.

Please explain why this is happening. Thanks in advance!
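For reference, a minimal sketch of the pattern described above (the descriptor name client_fd and the helper client_alive are illustrative, not taken from the actual server code):

#include <sys/socket.h>

/* Probe the peer with send() and treat -1 as "client gone".
 * This is the approach that works on Windows; on Linux the same
 * call raises SIGPIPE and terminates the process once the client
 * has disconnected, which is the crash described above. */
int client_alive(int client_fd)
{
    return send(client_fd, "ping", 4, 0) != -1;
}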

Danny asked Aug 25 '09 at 19:08


2 Answers

This is caused by the SIGPIPE signal. See send(2):

The send() function shall fail if:
[EPIPE] The socket is shut down for writing, or the socket is connection-mode and is no longer connected. In the latter case, and if the socket is of type SOCK_STREAM or SOCK_SEQPACKET and the MSG_NOSIGNAL flag is not set, the SIGPIPE signal is generated to the calling thread.

You can avoid this by using the MSG_NOSIGNAL flag on the send() call, or by ignoring the SIGPIPE signal with signal(SIGPIPE, SIG_IGN) at the beginning of your program. Then the send() function will return -1 and set errno to EPIPE in this situation.
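For example, a minimal sketch of the MSG_NOSIGNAL variant (the descriptor name sock and the helper peer_connected are assumptions for illustration):

#include <errno.h>
#include <sys/socket.h>

/* Returns 1 if the probe was sent, 0 if the peer has gone away.
 * MSG_NOSIGNAL suppresses SIGPIPE, so a dead connection just makes
 * send() return -1 with errno set to EPIPE (a reset connection may
 * instead report ECONNRESET). */
int peer_connected(int sock)
{
    ssize_t n = send(sock, "ping", 4, MSG_NOSIGNAL);
    if (n == -1 && (errno == EPIPE || errno == ECONNRESET))
        return 0;   /* peer is gone: tear down the socket */
    return 1;       /* sent, or a transient error unrelated to disconnect */
}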

mark4o answered Nov 12 '22 at 06:11


You need to ignore the SIGPIPE signal. If a write error happens on a socket, your process will get SIGPIPE, and the default behavior of that signal is to kill your process. When writing networking code on *nix you usually want:

signal(SIGPIPE, SIG_IGN);
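As a rough sketch, assuming client_fd is an already-accepted TCP connection (the function names below are illustrative), the whole pattern then looks something like this:

#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/socket.h>

/* Call once at program startup: a write on a broken connection now
 * fails with errno == EPIPE instead of delivering SIGPIPE. */
void install_signal_handling(void)
{
    signal(SIGPIPE, SIG_IGN);
}

/* Later, in the server loop: detect the disconnect from send()'s
 * return value, as on Windows. */
void poll_client(int client_fd)
{
    if (send(client_fd, "ping", 4, 0) == -1 && errno == EPIPE) {
        close(client_fd);   /* client no longer connected; tear the socket down */
    }
}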
nos answered Nov 12 '22 at 07:11