I'm writing a program that uses epoll_wait to wait for file descriptors on 64-bit Linux, and I'd like to pack some extra information together with the file descriptor in the epoll_event user data.
I know that in practice a file descriptor is unlikely to exceed 32 bits. What I want to know is: does the kernel guarantee that file descriptors fall within a specific range, or do they just start small and rarely grow very large?
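For example, something like this is what I have in mind (a sketch, assuming fds fit in 32 bits; the "tag" value is my own data, not part of the epoll API):

```c
#include <stdint.h>

/* Pack a file descriptor and a 32-bit tag of our own into the single
 * 64-bit value that fits in epoll_event's data.u64 field.
 * This only works if fds are guaranteed to fit in 32 bits. */
static inline uint64_t pack(int fd, uint32_t tag)
{
    return ((uint64_t)tag << 32) | (uint32_t)fd;
}

static inline int unpack_fd(uint64_t data)
{
    return (int)(uint32_t)(data & 0xffffffffu);
}

static inline uint32_t unpack_tag(uint64_t data)
{
    return (uint32_t)(data >> 32);
}

/* Usage with epoll would look roughly like:
 *   struct epoll_event ev = { .events = EPOLLIN,
 *                             .data.u64 = pack(fd, tag) };
 *   epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
 */
```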
The epoll_ctl(2) interface for adding new file descriptors takes an int fd argument, so you're already limited to the 32-bit range (at least on the Linux platforms I'm familiar with).

You're further limited by the /proc/sys/fs/file-max system-wide limit on the number of open files across all processes; /proc/sys/fs/file-max is currently 595956 on my system.
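You can read that system-wide limit programmatically; a minimal sketch (the path is the standard procfs location, so this is Linux-specific):

```c
#include <stdio.h>

/* Read the system-wide open-file limit from procfs.
 * Returns the limit, or -1 if it could not be read. */
long long read_file_max(void)
{
    long long v = -1;
    FILE *f = fopen("/proc/sys/fs/file-max", "r");
    if (f) {
        if (fscanf(f, "%lld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}
```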
Each process is further limited via the setrlimit(2) RLIMIT_NOFILE per-process limit on the number of open files. 1024 is a common RLIMIT_NOFILE limit. (It's very easy to change this limit via /etc/security/limits.conf.)
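A process can inspect its own RLIMIT_NOFILE with getrlimit(2) (and raise the soft limit up to the hard limit with setrlimit(2)); a minimal sketch:

```c
#include <sys/resource.h>

/* Return the current soft RLIMIT_NOFILE limit, or 0 on error. */
unsigned long long nofile_soft_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return 0;
    return (unsigned long long)rl.rlim_cur;
}

/* To raise the soft limit toward the hard limit:
 *   struct rlimit rl;
 *   getrlimit(RLIMIT_NOFILE, &rl);
 *   rl.rlim_cur = rl.rlim_max;
 *   setrlimit(RLIMIT_NOFILE, &rl);
 */
```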
It's a rare application that needs more than 1024 open files. Using the full 32-bit range seems unlikely as well, since each open file takes some kernel memory to represent: four billion struct inode structures (at roughly 280 bytes each, at minimum) is a lot of pinned memory.
Do you plan on having 2 billion file descriptors open, and do you expect the OS to handle this?
In most *nixes, functions that return a file descriptor return it as an int, with values < 0 indicating an invalid descriptor. Since those functions return FDs in an int, that type's range (minus the negatives, no pun intended) is the range for FDs. I'd follow suit: use the same type, thus, int.
I found a comment in the kernel indicating the hard upper limit is 1024*1024.