I have a problem with a server socket under Linux. For some reason unknown to me, the server socket vanishes and I get a Bad file descriptor error in the select call that waits for an incoming connection. This problem always occurs when I close an unrelated socket connection in a different thread. It happens on an embedded Linux with a 2.6.36 kernel.
Does anyone know why this would happen? Is it normal that a server socket can simply vanish, resulting in Bad file descriptor?
edit:
The other socket code implements a VNC server and runs in a completely different thread. The only thing special in that other code is the use of setjmp/longjmp, but that should not be a problem.
The code that creates the server socket is the following:
int server_socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
if (server_socket < 0) {
    perror("socket");
    return 0;
}

struct sockaddr_in saddr;
memset(&saddr, 0, sizeof(saddr));
saddr.sin_family = AF_INET;
saddr.sin_addr.s_addr = htonl(INADDR_ANY);
saddr.sin_port = htons(1234);

// Allow quick restarts without waiting for TIME_WAIT to expire.
const int optionval = 1;
setsockopt(server_socket, SOL_SOCKET, SO_REUSEADDR, &optionval, sizeof(optionval));

if (bind(server_socket, (struct sockaddr *) &saddr, sizeof(saddr)) < 0) {
    perror("bind");
    return 0;
}
if (listen(server_socket, 1) < 0) {
    perror("listen");
    return 0;
}
I wait for an incoming connection using the code below:
static int WaitForConnection(int server_socket, struct timeval *timeout)
{
    fd_set read_fds;
    FD_ZERO(&read_fds);
    int max_sd = server_socket;
    FD_SET(server_socket, &read_fds);

    // This select fails with EBADF ("Bad file descriptor") in the error case,
    // even though the server socket was never closed with 'close'.
    int res = select(max_sd + 1, &read_fds, NULL, NULL, timeout);
    if (res > 0) {
        struct sockaddr_in caddr;
        socklen_t clen = sizeof(caddr);
        return accept(server_socket, (struct sockaddr *) &caddr, &clen);
    }
    return -1;
}
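The function is driven from a plain polling accept loop, roughly like the following (the loop shape and the handle_client name are only illustrative, not the actual caller):

// Illustrative caller of WaitForConnection; handle_client is a hypothetical handler.
for (;;) {
    struct timeval timeout = { .tv_sec = 1, .tv_usec = 0 };   // example 1 s poll interval
    int client = WaitForConnection(server_socket, &timeout);
    if (client >= 0) {
        handle_client(client);   // hypothetical: serve the accepted connection
        close(client);
    }
}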
edit: When the problem case happens, I currently simply restart the server, but I don't understand why the server socket descriptor should suddenly become an invalid file descriptor:
int error = 0;
socklen_t len = sizeof(error);
// Query the pending error on the socket; if even this fails,
// give up on the descriptor and recreate the server socket.
int retval = getsockopt(server_socket, SOL_SOCKET, SO_ERROR, &error, &len);
if (retval < 0) {
    close(server_socket);
    goto server_start;
}
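A quick way to distinguish "the descriptor was closed or reused elsewhere" from "the socket merely reports an error" is to probe the descriptor with fcntl before restarting; fd_is_valid here is only an illustrative helper, not part of the original code:

#include <errno.h>
#include <fcntl.h>

// Returns 1 if 'fd' still refers to an open file description, 0 otherwise.
// F_GETFD fails with EBADF exactly when the descriptor is no longer open.
static int fd_is_valid(int fd)
{
    return fcntl(fd, F_GETFD) != -1 || errno != EBADF;
}

/* usage, before deciding to restart */
if (!fd_is_valid(server_socket)) {
    // The descriptor was closed (or reused) somewhere else in the process,
    // most likely by the other thread.
}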
Sockets (file descriptors) usually suffer from the same management issues as raw pointers in C. Whenever you close a socket, do not forget to assign -1 to the variable that holds the descriptor value:
close(socket);
socket = -1;
As you would do with a C pointer:
free(buffer);
buffer = NULL;
If you forget to do this, you can later close the socket twice, just as you would free() memory twice if it were a pointer.
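A minimal sketch of that discipline, wrapped in a helper (safe_close is only an illustrative name, not a standard function):

#include <unistd.h>

// Close a descriptor exactly once and mark the variable as invalid.
static void safe_close(int *fd)
{
    if (fd != NULL && *fd >= 0) {
        close(*fd);
        *fd = -1;   // a second call becomes a harmless no-op
    }
}

/* usage */
safe_close(&client_socket);
safe_close(&client_socket);   // safe: the descriptor is already -1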
The other issue might be related to a fact that people usually forget: file descriptors in a UNIX environment start from 0. If somewhere in the code you have
struct FooData {
    int foo;
    int socket;
    ...
};

// Either
struct FooData my_data_1 = {0};
// Or
struct FooData my_data_2;
memset(&my_data_2, 0, sizeof(my_data_2));
In both cases, my_data_1 and my_data_2 have a valid descriptor (socket) value. And later, some piece of code responsible for freeing the FooData structure may blindly close() this descriptor, which happens to be your server's listening socket (0).
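One way to defend against this, sketched under the same assumptions as the struct above (foo_init and foo_destroy are illustrative helper names), is to treat -1 as the "no socket" value right from initialization:

#include <string.h>
#include <unistd.h>

struct FooData {
    int foo;
    int socket;
};

static void foo_init(struct FooData *d)
{
    memset(d, 0, sizeof(*d));
    d->socket = -1;              // -1 means "no descriptor"; 0 is a real fd
}

static void foo_destroy(struct FooData *d)
{
    if (d->socket >= 0) {        // never close a descriptor we do not own
        close(d->socket);
        d->socket = -1;
    }
}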