I'm running npm inside a Docker container, and every so often it aborts because it cannot allocate enough memory. I see flags like --memory (see "How do I set resources allocated to a container using docker?") for the docker run command that limit the maximum amount of memory a container may consume, but I haven't yet seen anything that would let me reserve an amount of memory for the container and abort immediately if it cannot be allocated.
By default, Docker does not apply memory limits to individual containers; a container can consume all available memory on the host. No need to panic (for most users)! If you are using Docker Desktop, the host is actually a virtualized host.
The relevant docker run flags are:

--memory (-m): The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 6m (6 megabytes).
--memory-swap: The amount of memory the container is allowed to swap to disk.
To limit memory, pass the --memory flag when starting the container.
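For example, a minimal sketch (the node:20 image and the npm install command are placeholders, not from the original question):

# Cap the container at 512 MB of RAM and 1 GB of RAM+swap total;
# npm is killed by the OOM killer if it exceeds the cap.
docker run --memory=512m --memory-swap=1g node:20 npm install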
This is not how memory management works under Linux.
If you run full virtualization, like QEMU, then all of the memory can be allocated and passed down into the VM. The VM then boots its own kernel, and that kernel manages the memory inside the VM.
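To illustrate the contrast, on a reasonably recent QEMU the guest's RAM can be backed and touched up front (a sketch; guest.img is a placeholder, and the exact memory-backend options vary by QEMU version):

# Give the guest 2 GB of RAM; with prealloc=on the backing memory
# is allocated at startup rather than lazily on first use.
qemu-system-x86_64 -m 2G \
  -object memory-backend-ram,id=mem0,size=2G,prealloc=on \
  -machine memory-backend=mem0 \
  -hda guest.img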
In Docker, or any other container/namespace system, memory is managed by the one kernel that runs both Docker and the "containers". A process run in a container still runs like a normal process, just in a different cgroup. Each cgroup has limits, such as how much memory the kernel will hand out to userland or which network interfaces it sees, but everything still runs on the same kernel.
An analogy is that Docker is a "glorified ulimit". Processes under such a limit still behave as normal Linux processes, and just as you can't pre-allocate memory for Firefox, you can't pre-allocate memory for a Docker container.
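To make the analogy concrete (a sketch; app.js and my-node-image are hypothetical names), both of the following cap memory after the fact, and neither reserves it up front:

# Classic ulimit: cap the address space of a child process (in KiB)
bash -c 'ulimit -v 262144; exec node app.js'

# Docker: the same idea via a cgroup; the OOM killer fires past the cap
docker run --memory=256m my-node-image

The closest Docker offers to a reservation is --memory-reservation, a soft limit enforced only under memory pressure; it still will not fail fast at startup if the memory is unavailable.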