I have a websocket service. It's strange that it gets the error "too many open files", even though I have set the system configuration:
    /etc/security/limits.conf
        * soft nofile 65000
        * hard nofile 65000

    /etc/sysctl.conf
        net.ipv4.ip_local_port_range = 1024 65000

    $ ulimit -n
    65000
So I think my system configuration is right.
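For completeness: the per-process nofile limit set in limits.conf is separate from the kernel-wide file descriptor cap. A quick sketch to check both (the numbers shown are only example output):

    $ ulimit -n                      # per-process soft limit for the current shell
    65000
    $ cat /proc/sys/fs/file-max      # kernel-wide maximum number of open file handles
    1623626
    $ sysctl fs.file-max             # same value, read via sysctl
    fs.file-max = 1623626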
My service is managed by supervisor. Is it possible that supervisor is imposing its own limits?
Checking a process started by supervisor:
    $ cat /proc/815/limits
    Max open files            1024                 4096                 files
Checking a process started manually:
    $ cat /proc/900/limits
    Max open files            65000                65000                files
The difference comes from how the service is started under supervisor. If I restart supervisor and its child processes by hand, the "max open files" limit is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.
Maybe supervisor starts too early in the boot sequence, so the system configuration has not been applied yet when supervisor starts?
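If supervisor itself turns out to be the limiting factor, supervisord has a minfds option in its [supervisord] section: supervisord tries to raise its own file descriptor limit to at least this value (if the hard limit allows), and the processes it manages inherit it. A minimal sketch, assuming the Ubuntu default config path, which may differ on your system:

    ; /etc/supervisor/supervisord.conf
    [supervisord]
    minfds=65000    ; supervisord raises its own open-files limit to at least this
                    ; value where possible; managed child processes inherit it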
edit:
system: ubuntu 12.04 64bit
It's not a supervisor problem. All processes that start automatically after a system reboot do not get the configured limit (max open files = 1024), but after a manual restart they are fine.
update
Maybe the problem is: processes started automatically at boot are launched by init/upstart rather than through a login session, so the PAM limits from /etc/security/limits.conf are never applied to them.
Now the question is: how do I set a global nofile limit? I don't want to add a nofile limit to every upstart script that needs it.
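One workaround, assuming supervisor is started from an upstart job (the job file name below is an assumption), is the per-job limit stanza. Upstart does not read /etc/security/limits.conf, but it honors this stanza for the job it appears in:

    # /etc/init/supervisord.conf  (hypothetical job name)
    limit nofile 65000 65000      # soft and hard open-files limits for this job's processes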
The "Too many open files" message occurs on UNIX and Linux operating systems when the default setting for the maximum number of open files is too low. To avoid this condition, increase the maximum open files (for example, to 8000) by editing /etc/security/limits.conf.
A regular, non-root user can raise their soft limit to any value up to their hard limit; only the root user can increase the hard limit. To see the current soft and hard limits, use ulimit with the -S (soft) or -H (hard) option together with the -n (open files) option.
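For example (the numbers are only illustrative output):

    $ ulimit -Sn        # current soft limit on open files
    1024
    $ ulimit -Hn        # current hard limit on open files
    4096
    $ ulimit -n 4096    # a non-root user may raise the soft limit up to the hard limit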
The "Too many open files" message means that the operating system has reached the maximum "open files" limit and will not allow SecureTransport, or any other running applications to open any more files. The open file limit can be viewed with the ulimit command: The ulimit -aS command displays the current limit.
I fixed this issue by setting the limits for all users in the following file:
    $ cat /etc/security/limits.d/custom.conf
    * hard nofile 550000
    * soft nofile 550000
REBOOT THE SERVER after setting the limits.
VERY IMPORTANT: The /etc/security/limits.d/ folder contains user-specific limits (in my case, Hadoop 2 (Cloudera) related limits). These user-specific limits override the global limits, so if your limits are not being applied, be sure to check the user-specific limits in the /etc/security/limits.d/ folder as well as in the file /etc/security/limits.conf.
CAUTION: Setting user-specific limits is the way to go in all cases; setting the global (*) limit should be avoided. In my case it was an isolated environment and I just needed to eliminate the file-limit issue from my experiment.
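For example, a user-specific entry might look like this (the user name and file name are hypothetical):

    # /etc/security/limits.d/mysvc.conf
    mysvcuser  soft  nofile  65000
    mysvcuser  hard  nofile  65000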
Hope this saves someone some hair - as I spent too much time pulling my hair out chunk by chunk!