I am reading about 6000 text files into memory in a loop with the following code:
void readDocs(const char *dir, char **array){
    DIR *dp = opendir(dir);
    struct dirent *ep;
    struct stat st;
    static uint count = 0;
    if (dp != NULL){
        while ((ep = readdir(dp))){ // crawl through directory
            char name[strlen(dir) + strlen(ep->d_name) + 2];
            sprintf(name, "%s/%s", dir, ep->d_name);
            if(ep->d_type == DT_REG){ // regular file
                stat(name, &st);
                array[count] = (char*) malloc(st.st_size);
                int f;
                if((f = open(name, O_RDONLY)) < 0) perror("open: ");
                read(f, array[count], st.st_size);
                if(close(f) < 0) perror("close: ");
                ++count;
            }
            else if(ep->d_type == DT_DIR && strcmp(ep->d_name, "..") && strcmp(ep->d_name, "."))
                // recurse into subdirectories
                readDocs(name, array);
        }
    }
}
In iteration 2826 I get a "Too many open files" error when opening the 2826th file.
No error occurred in the close operation up to this point.
Since it always fails in the 2826th iteration, I do not believe the problem is that I should wait until a file is really closed after calling close().
I had the same issue using fopen, fread and fclose.
I don't think it has to do with the context of this snippet, but if you do, I will provide it.
Thanks for your time!
EDIT:
I put the program to sleep and checked /proc/<pid>/fd/ (thanks to nos). As you suspected, there were exactly 1024 open file descriptors, which I found to be a common default limit.
+ I have included the whole function, which reads documents out of a directory and all its subdirectories.
+ The program runs on Linux! Sorry for forgetting that!
The "Too many open files" message means that the operating system has reached the maximum "open files" limit and will not allow SecureTransport, or any other running applications to open any more files. The open file limit can be viewed with the ulimit command: The ulimit -aS command displays the current limit.
The opendir() function shall open a directory stream corresponding to the directory named by the dirname argument. The directory stream is positioned at the first entry. If the type DIR is implemented using a file descriptor, applications shall only be able to open up to a total of {OPEN_MAX} files and directories.
You need to call closedir() after having looped. Opening a directory also consumes a file descriptor.
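As a sketch (loop body elided, same includes as the question), the function would look like this with the missing call added:

    void readDocs(const char *dir, char **array){
        DIR *dp = opendir(dir);
        if (dp == NULL)
            return;

        struct dirent *ep;
        while ((ep = readdir(dp))){
            /* ... read regular files, recurse into subdirectories ... */
        }

        closedir(dp); /* releases the descriptor held by the directory stream */
    }

Without the closedir(), every directory and subdirectory visited leaks one descriptor, so the recursion plus ~2800 files is enough to exhaust the 1024-descriptor limit you observed.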
You may be hitting the OS limit for the number of open files allowed. Not knowing which OS you are using, you should google your OS plus "too many open files" to find out how to fix this. Here is one result for Linux: http://lj4newbies.blogspot.com/2007/04/too-many-open-files.html
I solved the problem by adding the following to /etc/security/limits.conf:
    * soft nofile 40960
    * hard nofile 102400
The remaining problem was that, after logging in to Debian, ulimit -n showed 40960, but after su to another user it dropped back to 1024. I had to uncomment one line in /etc/pam.d/su:
    session required pam_limits.so
After that, the raised limits are always applied.