OPEN_MAX is the constant that defines the maximum number of open files allowed for a single program. According to Beginning Linux Programming, 4th Edition, page 101:

The limit, usually defined by the constant OPEN_MAX in limits.h, varies from system to system, ...

On my system, the file limits.h in the directory /usr/lib/gcc/x86_64-linux-gnu/4.6/include-fixed does not define this constant. Am I looking at the wrong limits.h, or has the location of OPEN_MAX changed since 2008?
For what it's worth, the 4th edition of Beginning Linux Programming was published in 2007; parts of it may be a bit out of date. (That's not a criticism of the book, which I haven't read.)
It appears that OPEN_MAX is deprecated, at least on Linux systems. The reason appears to be that the maximum number of files that can be opened simultaneously is not fixed, so a macro that expands to an integer literal is not a good way to convey that information.

There's another macro, FOPEN_MAX, that should be similar; I can't think of a reason why OPEN_MAX and FOPEN_MAX, if both are defined, should have different values. But FOPEN_MAX is mandated by the C language standard, so systems don't have the option of not defining it. The C standard says that FOPEN_MAX

expands to an integer constant expression that is the minimum number of files that the implementation guarantees can be open simultaneously

(If the word "minimum" is confusing, it's a guarantee that a program can open at least that many files at once.)
If you want the current maximum number of files that can be opened, take a look at the sysconf() function; on my system, sysconf(_SC_OPEN_MAX) returns 1024. (The sysconf() man page refers to a symbol OPEN_MAX. This is not a count, but a value recognized by sysconf(), and it's not defined on my system.)
I've searched for OPEN_MAX (word match, so excluding FOPEN_MAX) on my Ubuntu system, and found the following (these are obviously just brief excerpts):

/usr/include/X11/Xos.h:
# ifdef __GNU__
# define PATH_MAX 4096
# define MAXPATHLEN 4096
# define OPEN_MAX 256 /* We define a reasonable limit. */
# endif
/usr/include/i386-linux-gnu/bits/local_lim.h:
/* The kernel header pollutes the namespace with the NR_OPEN symbol
and defines LINK_MAX although filesystems have different maxima. A
similar thing is true for OPEN_MAX: the limit can be changed at
runtime and therefore the macro must not be defined. Remove this
after including the header if necessary. */
#ifndef NR_OPEN
# define __undef_NR_OPEN
#endif
#ifndef LINK_MAX
# define __undef_LINK_MAX
#endif
#ifndef OPEN_MAX
# define __undef_OPEN_MAX
#endif
#ifndef ARG_MAX
# define __undef_ARG_MAX
#endif
/usr/include/i386-linux-gnu/bits/xopen_lim.h:
/* We do not provide fixed values for
ARG_MAX Maximum length of argument to the `exec' function
including environment data.
ATEXIT_MAX Maximum number of functions that may be registered
with `atexit'.
CHILD_MAX Maximum number of simultaneous processes per real
user ID.
OPEN_MAX Maximum number of files that one process can have open
at anyone time.
PAGESIZE
PAGE_SIZE Size of bytes of a page.
PASS_MAX Maximum number of significant bytes in a password.
We only provide a fixed limit for
IOV_MAX Maximum number of `iovec' structures that one process has
available for use with `readv' or writev'.
if this is indeed fixed by the underlying system.
*/
Aside from the link given by cste, I would like to point out that there is a /proc/sys/fs/file-max entry that provides the number of files the system as a whole can have open at any given time.

Here's some docs: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/8.2/html/Performance_Tuning_Guide/system-tuning.html

Note that this does not GUARANTEE that you can open that many files: if the system runs out of some other resource (e.g. "no more memory available"), opening a file may well fail anyway.
FOPEN_MAX indicates that the C library guarantees at least this many files can be opened (as discussed above), but other limits may kick in first. Say, for example, the SYSTEM limit is 4000 files, and applications already running have 3990 files open. Then you won't be able to open more than 7 files (since stdin, stdout, and stderr take up three slots too). And if your rlimit is set to 5, then you can only open 2 files of your own.
In my opinion, the best way to know whether you can open a file is to open it. If that fails, you have to do something else. If you have a process that needs to open MANY files (e.g. a multithreaded search/compare on a machine with 256 cores and 8 threads per core, where each thread uses three files: "A", "B", and "diff"), then you may need to ensure that your limit allows 3 * 8 * 256 files to be open before you start creating threads, since a thread that fails to open its files is useless. But for most ordinary applications, just try to open the file; if it fails, tell the user (log it, or something), and/or try again...
I suggest using the magic of grep to find this constant under /usr/include:

grep -rn --col OPEN_MAX /usr/include
...
...
/usr/include/stdio.h:159: FOPEN_MAX Minimum number of files that can be open at once.
...
...
Hope it helps.