Socket accept - "Too many open files"

Tags: c, sockets

I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to help with that, but after I run a few tests, or if I give it too high a rate (around 600+) at which to make connections, I get a "Too many open files" error.

After I am done dealing with a request, I always call close() on the socket. I have tried using the shutdown() function as well, but nothing seems to help. Is there any way around this?
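Roughly, the flow looks like the sketch below (simplified to a single thread; handle_request is a placeholder for the real handler, and the real server hands the accepted descriptor to a worker thread):

/* Simplified sketch of the accept/handle/close loop described above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void handle_request(int fd) {
    /* ... read the request and write a response on fd ... */
    (void)fd;
}

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }
    if (listen(listen_fd, 128) < 0) { perror("listen"); return 1; }

    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0) { perror("accept"); continue; }

        handle_request(conn_fd);

        /* Every accepted descriptor must be closed on every code path,
         * including error paths, or descriptors leak until the limit is hit. */
        if (close(conn_fd) < 0)
            perror("close");
    }
}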

Asked May 19 '09 by Scott




2 Answers

There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.

You can check the following:

cat /proc/sys/fs/file-max 

That will give you the system-wide limit on file descriptors.

On the shell level, this will tell you your personal limit:

ulimit -n 

This can be changed in /etc/security/limits.conf - it's the nofile param.
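For example, lines like these (using a hypothetical user name) raise the per-user limit; they take effect at the next login:

scott    soft    nofile    4096
scott    hard    nofile    8192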

However, if you're closing your sockets correctly, you shouldn't receive this error unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately. I would verify that they are being handled properly.
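One quick check (myserver is a placeholder for your server binary) is to count the descriptors the process holds while the benchmark runs; if the count only ever grows, something is leaking them:

ls /proc/$(pidof myserver)/fd | wc -l
netstat -tnp 2>/dev/null | grep -c CLOSE_WAIT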

Answered Sep 30 '22 by Reed Copsey


I had a similar problem. A quick solution is:

ulimit -n 4096 

The explanation is as follows: each server connection is a file descriptor. On CentOS, Red Hat and Fedora (and probably others), the per-user file descriptor limit is 1024, no idea why. You can easily see it by typing: ulimit -n

Note that this is not the same as the system-wide maximum (/proc/sys/fs/file-max).

In my case it was a problem with Redis, so I did:

ulimit -n 4096
redis-server -c xxxx

In your case, start your own server instead of redis-server.
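Since your server is a C program, you could also raise the soft limit from inside the program at startup with getrlimit()/setrlimit(). This is just a sketch, and it cannot go above the hard limit without extra privileges:

#include <stdio.h>
#include <sys/resource.h>

/* Raise the soft RLIMIT_NOFILE up to the hard limit; returns 0 on success. */
static int raise_fd_limit(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = rl.rlim_max;   /* soft limit cannot exceed the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}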

Answered Sep 30 '22 by Nick