I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to help with that, but after I run a few tests, or if I give it too high a rate (around 600+) at which to make connections, I get a "Too many open files" error.
After I am done dealing with the request, I always do a close() on the socket. I have tried using the shutdown() function as well, but nothing seems to help. Is there any way around this?
"Too many open files " errors happen when a process needs to open more files than it is allowed by the operating system. This number is controlled by the maximum number of file descriptors the process has. 2. Explicitly set the number of file descriptors using the ulimit command.
Make sure you close the TCP connection on the client before closing it on the server. Also consider reducing the time that sockets spend in the TIME_WAIT state; on most Linux machines you can do this by adding a line to /etc/sysctl.conf (e.g. to reduce it to 30 seconds):
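The original snippet does not include the actual line; the setting most commonly suggested for this is shown below, so treat the exact parameter and value as an assumption rather than a prescription:

net.ipv4.tcp_fin_timeout = 30

After saving the file, run sysctl -p to load the new value.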
To find out the maximum number of files one of your processes can open, use the ulimit command with the -n (open files) option. To find the maximum number of processes a user can have, use ulimit with the -u (user processes) option.
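For example, to print both the current soft limits and the hard limits they can be raised to (-S and -H are standard options of the shell's ulimit builtin):

ulimit -Sn; ulimit -Hn   # soft and hard limits on open files
ulimit -Su; ulimit -Hu   # soft and hard limits on user processes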
There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.
You can check the following:
cat /proc/sys/fs/file-max
That will give you the system wide limits of file descriptors.
On the shell level, this will tell you your personal limit:
ulimit -n
This can be changed in /etc/security/limits.conf - it's the nofile param.
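A typical pair of entries in /etc/security/limits.conf looks like the lines below (the user name and the values are only an example; adjust them for your own account and workload):

youruser    soft    nofile    4096
youruser    hard    nofile    8192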
However, if you're closing your sockets correctly, you shouldn't hit this limit unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed properly, so I would verify that every accepted connection is eventually closed.
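As a minimal sketch of what "handled properly" means for a multi-threaded C server like the one described (the function and variable names here are made up for illustration), every accepted descriptor should be closed on every exit path of the thread that handles it:

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

void *handle_client(void *arg)
{
    int client_fd = *(int *)arg;   /* fd returned by accept() */
    free(arg);

    /* ... read the request and write the response here ... */

    /* shutdown() only signals the peer that we are done; the descriptor
       itself is released by close(). Without close(), each request leaks
       one fd and accept() eventually fails with EMFILE ("Too many open files"). */
    shutdown(client_fd, SHUT_RDWR);
    close(client_fd);
    return NULL;
}

Note that shutdown() is optional here; the call that actually frees the descriptor is close(), and it must happen even on error paths.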
I had a similar problem. The quick solution is:
ulimit -n 4096
The explanation is as follows: each server connection is a file descriptor. In CentOS, Red Hat, and Fedora (and probably others), the per-user open-file limit is 1024 - no idea why. You can see it easily by typing: ulimit -n
Note that this has little to do with the system-wide maximum number of files (/proc/sys/fs/file-max).
In my case it was a problem with Redis, so I did:
ulimit -n 4096
redis-server -c xxxx
In your case, instead of Redis, you need to start your own server the same way: raise the limit in the shell first, then launch the server from that same shell.
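For the server from the question, that pattern would look something like this (./myserver is just a placeholder for whatever binary your project builds):

ulimit -n 4096
./myserver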