I wasn't able to find a straight answer to this question, so here it is:
Let's say I have a host with a max open files limit of 1024:
[root@host]# ulimit -a
open files (-n) 1024
and a docker container running in that host with:
[root@container]# ulimit -a
open files (-n) 1048576
So will I have any problem in the container if I try to open more than 1024 files? I think the real limit for the container in this case will be 1024 files. What do you think?
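One quick way to investigate is to compare what each side reports. This is just a sketch; "mycontainer" is a hypothetical name for a running container:

```shell
# Compare the limits the host and the container each report.
ulimit -Sn    # soft limit on the host, e.g. 1024
ulimit -Hn    # hard limit on the host, e.g. 1048576
# docker exec mycontainer sh -c 'ulimit -n'    # limit seen inside the container
```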
Docker ulimits limit a program's resource utilization to prevent a runaway bug or security breach from bringing the whole system down. For example, the default limit on Amazon AWS is 1024, which is not enough for some applications (such as Sisense) to run properly.
By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container. To change this behavior, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option.
To limit the maximum amount of memory usage for a container, add the --memory option to the docker run command. Alternatively, you can use the shortcut -m. Within the command, specify how much memory you want to dedicate to that specific container.
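As a sketch of how those flags combine (the image name nginx and container name capped are just placeholders, and a working Docker installation is assumed):

```shell
# Hypothetical example: cap the container at 512 MB; per the note above,
# --oom-kill-disable should only be used together with a memory cap.
docker run -d --name capped --memory 512m --oom-kill-disable nginx

# Read back the effective limit in bytes (512 * 1024 * 1024 = 536870912)
docker inspect --format '{{.HostConfig.Memory}}' capped
```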
The real limit will be 1048576.
Keep in mind that containers are basically just isolated processes running on the same operating system as the host (the original answer illustrated this with a diagram contrasting containers and VMs).
As every system call in the container will be handled directly by the host OS, the ulimit that is displayed (1048576) comes directly from the host OS and that is the value that will be used.
The difference in the ulimits could have been caused by a Docker configuration, for example.
(Note that for VMs, this will be different: The guest OS might display a value of 1048576, but the open calls will in the end be handled by the host OS, which will impose the limit of 1024)
Although it's a little late, I just want to clear up the confusion about the difference in ulimit values.
If you do not set the value when running the container, the ulimit value displayed inside the container comes from the host OS. The question, then, is why you see a different value when running the same command on the host.
This is because the command on the host shows its soft limit, while the value shown in the container is the hard limit of the host OS. You are allowed to exceed the soft limit (up to the hard limit), so in a sense the hard limit is the real limit. You can find more about ulimit in this link.
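The soft/hard distinction can be demonstrated with a minimal bash sketch (no Docker needed): any process may lower its own soft limit, and the change is confined to that process:

```shell
ulimit -Hn                        # hard limit, e.g. 1048576
ulimit -Sn                        # soft limit, e.g. 1024
( ulimit -Sn 256; ulimit -Sn )    # prints 256: lowered in the subshell only
ulimit -Sn                        # parent shell's soft limit is unchanged
```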
To see the hard limit, just type the following command:
ulimit -Hn
You will see that the values match.
N.B. You cannot cross the hard limit, but you can increase it if you are root.
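Both rules can be seen in one small bash sketch: lowering the hard limit is allowed for any user, but the soft limit can never be pushed above it:

```shell
# In a subshell: lower the hard limit (allowed without root), then try to
# raise the soft limit above it; the shell refuses.
( ulimit -Hn 1024
  ulimit -Sn 2048 2>/dev/null || echo "refused: soft limit cannot exceed hard limit" )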
Limits on open files can be set in the Docker configuration using LimitNOFILE, or they can be passed to the docker run command:
$ docker run -it --ulimit nofile=1024:2048 ubuntu bash -c 'ulimit -Hn && ulimit -Sn'
2048
1024
See here for elaboration on setting ulimits in Docker.
Note that this limit can be set higher than the OS's hard limit, which can cause trouble.
I needed a direct answer to the OP's question:
So will I have a problem in the container if I try to open more than 1024 files?
No. In this case I've found that the host value has no effect inside the container, and the Docker container can specify its own ulimit values.
In other words, it is valid for your container to set a higher value than the host's default value. So the effective ulimit nofile value in the container is 1048576 and this will work fine.
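If Docker is available, this is easy to verify directly (a sketch; the limit value 4096 is just illustrative): the container reports whatever nofile limit it was started with, regardless of the host shell's soft limit:

```shell
# Start a throwaway container with an explicit nofile limit and print
# what it sees (requires a working Docker installation).
docker run --rm --ulimit nofile=4096:4096 ubuntu bash -c 'ulimit -n'
```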