I'm using select() on a Linux/ARM platform to check whether a UDP socket has received a packet. I'd like to know how much time was remaining in the select() call if it returns before the timeout (having detected a packet).
Something along the lines of:
int wait_fd(int fd, int msec)
{
    struct timeval tv;
    fd_set rws;

    tv.tv_sec = msec / 1000ul;
    tv.tv_usec = (msec % 1000ul) * 1000ul;
    FD_ZERO(&rws);
    FD_SET(fd, &rws);
    (void)select(fd + 1, &rws, NULL, NULL, &tv);
    if (FD_ISSET(fd, &rws)) { /* There is data */
        msec = (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
        return (msec ? msec : 1);
    } else { /* There is no data */
        return 0;
    }
}
The safest thing is to ignore the ambiguous definition of select() and time it yourself: get the time before and after the select() call, and subtract the elapsed time from the interval you wanted.
If I recall correctly, select() treats the timeout as an I/O parameter: when select() returns, the time remaining is written back into the timeout variable. Linux does this, but POSIX permits rather than requires it, so it is not portable. On other systems you will have to record the current time before the call and again after, and take the difference between the two.