I'm porting a Windows network application to Linux and ran into a timeout problem with the select() call. The following function blocks for the entire timeout value and then returns, even though a packet sniffer shows that the client has already sent the data.
int recvTimeOutTCP( SOCKET socket, long sec, long usec )
{
struct timeval timeout;
fd_set fds;
timeout.tv_sec = sec;
timeout.tv_usec = usec;
FD_ZERO( &fds );
FD_SET( socket, &fds );
// Possible return values:
// -1: error occurred
// 0: timed out
// > 0: data ready to be read
cerr << "Waiting on fd " << socket << endl;
return select(1, &fds, 0, 0, &timeout);
}
The sets are typically implemented as bit vectors, so select() scans over the vector to see which fds are set. As an optimization, you pass in the number of fds to scan up to (the highest descriptor plus one) so that select() doesn't have to look at all fds up to FD_SETSIZE (which might not even be the same across compilation units).
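As a minimal sketch of that, here is a function watching two hypothetical descriptors, fd_a and fd_b (both names are made up for illustration); note that nfds is the highest descriptor value plus one, not the count of descriptors:

#include <sys/select.h>

int waitReadable( int fd_a, int fd_b, struct timeval *timeout )
{
    fd_set fds;
    FD_ZERO( &fds );
    FD_SET( fd_a, &fds );
    FD_SET( fd_b, &fds );
    // nfds is the highest descriptor in any set plus one,
    // so select() knows how far into the bit vector to scan
    int nfds = ( fd_a > fd_b ? fd_a : fd_b ) + 1;
    return select( nfds, &fds, 0, 0, timeout );
}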
select() timeout
The last argument taken by select() allows us to specify a timeout. It expects a pointer to a struct timeval, which is declared as follows:
struct timeval {
    long tv_sec;
    long tv_usec;
};
tv_sec holds the number of seconds, and tv_usec holds the number of microseconds (millionths of a second).
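For instance, a timeout expressed in milliseconds (msec here is a hypothetical value, not from the original code) would be split across the two fields like this:

long msec = 2500;                           // hypothetical 2.5-second timeout
struct timeval timeout;
timeout.tv_sec  = msec / 1000;              // whole seconds
timeout.tv_usec = ( msec % 1000 ) * 1000;   // remaining milliseconds as microseconds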
I think the first parameter to select() should be socket + 1: the nfds argument is the highest-numbered file descriptor in any of the sets, plus one, so passing 1 means select() never even looks at your socket and simply waits out the timeout.
You really should use another name, since socket is also the name of the socket() function. Usually sock is used.
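Putting the two suggestions together, a corrected sketch of the original function (parameter renamed to sock, first argument to select() changed to sock + 1) might look like this:

int recvTimeOutTCP( SOCKET sock, long sec, long usec )
{
    struct timeval timeout;
    fd_set fds;
    timeout.tv_sec = sec;
    timeout.tv_usec = usec;
    FD_ZERO( &fds );
    FD_SET( sock, &fds );
    // Possible return values:
    // -1: error occurred
    // 0: timed out
    // > 0: data ready to be read
    cerr << "Waiting on fd " << sock << endl;
    // nfds must be the highest descriptor in the set plus one
    return select( sock + 1, &fds, 0, 0, &timeout );
}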