
How to set socket timeout in C when making multiple connections?

Tags:

c

linux

sockets

I'm writing a simple program that makes multiple connections to different servers for status checks. All these connections are constructed on demand; up to 10 connections can be created simultaneously. I don't like the idea of one thread per socket, so I made all these client sockets non-blocking and threw them into a select() pool.

It worked great until my client complained that the wait is too long before they get an error report when a target server stops responding.

I've checked several topics on the forum. Some suggested using the alarm() signal or setting a timeout in the select() call. But I'm dealing with multiple connections, not just one. When a process-wide timeout signal fires, I have no way to tell which connection timed out among all the others.

Is there any way to change the system-default timeout duration?
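For reference, one signal-free way to get per-connection timeouts with select() is to keep an absolute deadline per connection and derive the select() timeout from the nearest one. A minimal sketch of that bookkeeping (the struct and helper names are illustrative only, not from this post):

/* Sketch only: track a per-connection deadline and pass the nearest one
 * as the select() timeout, so each socket can be failed individually
 * without alarm() or per-socket threads. */
#include <sys/select.h>
#include <sys/time.h>
#include <time.h>

struct conn {
    int    fd;        /* non-blocking socket, or -1 if unused         */
    time_t deadline;  /* absolute time at which this connection fails */
};

/* Timeout for select(): time until the earliest per-connection deadline. */
static struct timeval next_timeout(const struct conn *conns, int n)
{
    time_t now = time(NULL);
    time_t earliest = now + 60;                /* 60 s upper bound, arbitrary */
    for (int i = 0; i < n; i++)
        if (conns[i].fd >= 0 && conns[i].deadline < earliest)
            earliest = conns[i].deadline;
    struct timeval tv = { earliest > now ? earliest - now : 0, 0 };
    return tv;
}

After select() returns, any connection whose deadline has passed without its socket becoming readable or writable can be closed and reported as timed out, independently of the others.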

asked Nov 15 '10 by RichardLiu

People also ask

How do you set a socket timeout?

Java socket timeout answer: just set SO_TIMEOUT on your Java Socket, as shown in the following sample code:

String serverName = "localhost";
int port = 8080;
// set the socket SO timeout to 10 seconds
Socket socket = openSocket(serverName, port);
socket.setSoTimeout(10 * 1000);

What is difference between socket timeout and connection timeout?

connection timeout — a time period in which a client should establish a connection with a server. socket timeout — a maximum time of inactivity between two data packets when exchanging data with a server.
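For the C context of this question, a rough sketch of what those two timeouts look like (my illustration, not part of the quoted answer; the helper name and its parameters are made up): the connection timeout is typically enforced with a non-blocking connect() bounded by select(), while the socket timeout maps to SO_RCVTIMEO/SO_SNDTIMEO as in the accepted answer below.

/* Illustrative connect-with-timeout helper; assumes an already created
 * TCP socket and a filled-in sockaddr. Not from the quoted answer. */
#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

static int connect_with_timeout(int fd, const struct sockaddr *addr,
                                socklen_t addrlen, int seconds)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, addr, addrlen) == 0)
        return 0;                        /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;                       /* immediate failure     */

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { seconds, 0 };  /* the "connection timeout" */

    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                       /* timed out or select error */

    int err = 0;
    socklen_t len = sizeof err;
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
    return err == 0 ? 0 : -1;            /* 0 only if the handshake succeeded */
}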

How does socket timeout work?

A socket timeout is dedicated to monitoring the continuous incoming data flow. If the data flow is interrupted for the specified timeout, the connection is regarded as stalled/broken. Of course, this only works with connections where data is received all the time.

What is socket write timeout?

timeout - the socket timeout value passed to the Socket.setSoTimeout() method. The default on the server side is 60000 milliseconds. writeTimeout - a timeout value imposed on socket write operations. This feature is enabled by setting writeTimeout to a value, in milliseconds, greater than zero.


2 Answers

You can use the SO_RCVTIMEO and SO_SNDTIMEO socket options to set timeouts for receive and send operations on a socket, like so:

struct timeval timeout;
timeout.tv_sec = 10;
timeout.tv_usec = 0;

if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof timeout) < 0)
    error("setsockopt failed\n");

if (setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof timeout) < 0)
    error("setsockopt failed\n");

Edit: from the setsockopt man page:

SO_SNDTIMEO is an option to set a timeout value for output operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. In the current implementation, this timer is restarted each time additional data are delivered to the protocol, implying that the limit applies to output portions ranging in size from the low-water mark to the high-water mark for output.

SO_RCVTIMEO is an option to set a timeout value for input operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for input operations to complete. In the current implementation, this timer is restarted each time additional data are received by the protocol, and thus the limit is in effect an inactivity timer. If a receive operation has been blocked for this much time without receiving additional data, it returns with a short count or with the error EWOULDBLOCK if no data were received. The struct timeval parameter must represent a positive time interval; otherwise, setsockopt() returns with the error EDOM.
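To make the man-page behaviour concrete, here is a small sketch I'm adding (the function name is illustrative, not from this answer): with SO_RCVTIMEO set as above, a blocking recv() that hits the timeout returns -1 with errno set to EAGAIN/EWOULDBLOCK.

/* Sketch: reading on a socket with SO_RCVTIMEO in effect. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

static void read_with_timeout(int sockfd)
{
    char buf[512];
    ssize_t n = recv(sockfd, buf, sizeof buf, 0);

    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        fprintf(stderr, "recv timed out (no data within the SO_RCVTIMEO interval)\n");
    else if (n < 0)
        fprintf(stderr, "recv failed: %s\n", strerror(errno));
    else if (n == 0)
        fprintf(stderr, "peer closed the connection\n");
    /* else: n bytes were received into buf */
}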

answered Sep 19 '22 by Toby


I'm not sure I fully understand the issue, but I guess it's related to one I had. I'm using Qt with TCP socket communication, all non-blocking, on both Windows and Linux.

I wanted to get a quick notification when an already connected client failed or completely disappeared, instead of waiting the default 900+ seconds until the disconnect signal was raised. The trick to get this working was to set the TCP_USER_TIMEOUT socket option of the SOL_TCP level to the required value, given in milliseconds.

This is a comparatively new option (see https://www.rfc-editor.org/rfc/rfc5482), but apparently it works fine. I tried it on WinXP, Win7/x64 and Kubuntu 12.04/x64; my choice of 10 s turned out to be a bit long, but much better than anything else I had tried before ;-)

The only issue I came across was finding the proper includes, as apparently this option hasn't made it into the standard socket headers (yet..), so I finally defined them myself as follows:

#ifdef WIN32
    #include <winsock2.h>
#else
    #include <sys/socket.h>
#endif

#ifndef SOL_TCP
    #define SOL_TCP 6            // socket option TCP level
#endif
#ifndef TCP_USER_TIMEOUT
    #define TCP_USER_TIMEOUT 18  // how long for loss retry before timeout [ms]
#endif

Setting this socket option only works when the client is already connected; the lines of code look like:

int timeout = 10000;  // user timeout in milliseconds [ms]
setsockopt(fd, SOL_TCP, TCP_USER_TIMEOUT, (char *) &timeout, sizeof(timeout));
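A side note added for illustration (not part of the original answer): on Linux, when the user timeout expires because the peer stops acknowledging data, the kernel aborts the connection and the next socket call fails with ETIMEDOUT, which is what makes the stall visible so quickly. A minimal sketch of spotting that on a plain BSD socket (the helper name is made up):

/* Illustrative only: how a TCP_USER_TIMEOUT expiry shows up on the socket. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

static int check_peer_alive(int fd, const char *probe, size_t len)
{
    ssize_t n = send(fd, probe, len, 0);
    if (n < 0 && errno == ETIMEDOUT) {
        /* the peer stopped acking within the configured user timeout */
        fprintf(stderr, "connection aborted by TCP_USER_TIMEOUT: %s\n",
                strerror(errno));
        return -1;
    }
    return n < 0 ? -1 : 0;
}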

The failure of an initial connect is caught by a timer started when calling connect(), as Qt raises no signal for this case: the connected signal will not be raised because there is no connection, and the disconnected signal will not be raised either, because there has never been a connection yet.

answered Sep 22 '22 by wgr