 

Is it reasonable to expect that in Linux, fd < maximum number of open file descriptors?

I'm writing a server that needs to handle many open sockets, so I use setrlimit() to set the maximum number of open file descriptors (as root, before dropping privileges) like so:

#include <stdio.h>         /* perror */
#include <stdlib.h>        /* EXIT_FAILURE */
#include <sys/resource.h>  /* setrlimit, RLIMIT_NOFILE */

#define MAX_FD_C 9001

/* inside main(), as root, before dropping privileges: */
if (setrlimit(
      RLIMIT_NOFILE, &(struct rlimit){.rlim_cur = MAX_FD_C, .rlim_max = MAX_FD_C}
    ) == -1) {
    perror("Failed to set the maximum number of open file descriptors");
    return EXIT_FAILURE;
}

Now, I realize there are probably no guarantees and that I'm at the mercy of however the Linux kernel implements its file descriptor tables; but in practice, is it reasonable to assume that any fd this program receives from the Linux kernel will have a value less than the MAX_FD_C I set above?

I'd like to keep per-socket data as compact as possible, which could mean simply using an array like static struct client clients[MAX_FD_C] = {{0}}; and using the fd as the index into it (essentially my own version of the file descriptor table).
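That fd-indexed table could be sketched as follows; the struct client fields and helper names here are illustrative, not from the question, and the bounds checks guard against an fd that somehow exceeds the limit:

```c
/* Sketch of fd-indexed per-connection storage, assuming fd < MAX_FD_C. */
#include <stddef.h>
#include <string.h>

#define MAX_FD_C 9001

struct client {
    int  in_use;        /* is this slot occupied? */
    long bytes_sent;    /* example per-connection counter */
};

static struct client clients[MAX_FD_C];  /* zero-initialized, fd is the index */

/* Register a new connection; returns its slot, or NULL if fd is out of range. */
static struct client *client_add(int fd)
{
    if (fd < 0 || fd >= MAX_FD_C)
        return NULL;                     /* defensive: never index out of bounds */
    memset(&clients[fd], 0, sizeof clients[fd]);
    clients[fd].in_use = 1;
    return &clients[fd];
}

/* Release the slot when the connection closes. */
static void client_del(int fd)
{
    if (fd >= 0 && fd < MAX_FD_C)
        clients[fd].in_use = 0;
}
```

At 9001 slots the whole table is a few hundred kilobytes, so the O(1) lookup comes cheap compared to a hash map keyed on fd.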

Will asked Oct 21 '22


1 Answer

There are functions in the POSIX standard which already assume this. Have a look at FD_SETSIZE, select(), and FD_SET(). More directly, POSIX requires functions that allocate a file descriptor (open(), socket(), accept(), ...) to return the lowest-numbered descriptor not currently open for the process, so descriptor values stay below the open-descriptor limit.

Ben Voigt answered Oct 24 '22