Basically, the title says it all: is there any limit on the number of containers that can run at the same time on a single Docker host?
As a rough disk-space estimate: if each container's writable layer takes around 10MB, a host with 10GB of available disk space can hold about 1,000 containers.
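To sanity-check an estimate like that on a real host, you can look at what Docker is actually consuming; a minimal sketch using standard Docker and coreutils commands (/var/lib/docker is Docker's default data root, which is an assumption about your setup):

```
# Disk used by images, containers and volumes
docker system df

# Free space on the filesystem backing Docker's storage
df -h /var/lib/docker
```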
You can run both containers on different host ports and put a haproxy/nginx/varnish (native or inside another container) in front, listening on the host port and redirecting to the right container based on the URL. This is as much a question about the way TCP ports work as about the way Docker works.
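A minimal sketch of that setup, assuming two web containers and an nginx container in front (the image name my-app-image, the ports, and the URL paths are illustrative, not from the original answer):

```
# Two app containers published on different host ports
docker run -d --name app1 -p 8081:80 my-app-image
docker run -d --name app2 -p 8082:80 my-app-image

# nginx config that routes by URL path to the two host ports
cat > proxy.conf <<'EOF'
server {
    listen 80;
    location /app1/ { proxy_pass http://host.docker.internal:8081/; }
    location /app2/ { proxy_pass http://host.docker.internal:8082/; }
}
EOF

# Run nginx in a third container on the single public port;
# --add-host makes host.docker.internal resolve on Linux (Docker 20.10+)
docker run -d --name proxy -p 80:80 \
    --add-host host.docker.internal:host-gateway \
    -v "$PWD/proxy.conf:/etc/nginx/conf.d/default.conf:ro" nginx
```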
There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on how you are running your containers, what you are running inside them, and which kernel, distro, and Docker version you are on.
The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7. The kernel is 4.4.8.
Docker creates or uses a number of resources to run a container, on top of what you run inside the container:

- A virtual ethernet adaptor attached to the docker0 bridge (1023 max per bridge)
- A shm file system mount (1048576 mounts max per fs type)
- A docker-containerd-shim management process (~3MB per container on average, and bounded by sysctl kernel.pid_max)
- cgroups and namespaces
- Open file descriptors (bounded by ulimit -n and sysctl fs.file-max)
- Port mapping with -p runs an extra process per port number on the host (~4.5MB per port on average before 1.12, ~300k per port on 1.12 and later, and also bounded by sysctl kernel.pid_max)
- --net=none and --net=host would remove the networking overheads.

The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)
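Several of the limits above can be inspected directly; a quick sketch with standard Linux tools (the values in the last two lines are illustrative, not recommendations):

```
sysctl kernel.pid_max   # max PIDs: bounds shim/proxy processes per host
sysctl fs.file-max      # system-wide open file descriptor limit
ulimit -n               # per-process open file descriptor limit

# Raise them if they become the bottleneck
sudo sysctl -w kernel.pid_max=4194304
sudo sysctl -w fs.file-max=2097152
```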
If you are running apps in a runtime virtual machine (Node, Ruby, Python, Java), memory usage is likely to become your main issue.
IO across 1000 processes would cause a lot of IO contention.
1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above regarding garbage collection).
If you create network connections from 1000 containers, the host's network layer will get a workout.
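If you want to watch for that kind of contention while scaling up, a sketch using common Linux observability tools (vmstat ships with procps; pidstat and iostat are in the sysstat package):

```
vmstat 1             # "cs" column = context switches/s, "r" = run queue
pidstat -w 1         # per-process voluntary/involuntary context switches
iostat -x 1          # per-device IO utilisation and queue depth
cat /proc/net/dev    # raw per-interface network counters
```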
It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.
1023 Docker busybox containers running nc -l -p 80 -e echo host use up about 1GB of kernel memory and 3.5GB of system memory.
1023 plain nc -l -p 80 -e echo host processes running on the host use about 75MB of kernel memory and 125MB of system memory.
Starting 1023 containers serially took ~8 minutes.
Killing 1023 containers serially took ~6 minutes.
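A sketch of how a test like that can be reproduced (the bb name prefix is arbitrary; the nc invocation matches the busybox applet used for the figures above):

```
# Start 1023 busybox containers, each a tiny TCP echo listener
for i in $(seq 1 1023); do
  docker run -d --name "bb$i" busybox nc -l -p 80 -e echo host
done

# Tear them all down again
for i in $(seq 1 1023); do
  docker rm -f "bb$i"
done
```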
From a post on the Docker mailing list, at about 1,000 containers you start running into Linux networking issues.
The reason is a limit in the kernel: in net/bridge/br_private.h, BR_PORT_BITS cannot be extended because of spanning tree requirements. BR_PORT_BITS is 10, so a bridge supports 1 << 10 = 1024 port IDs (one of which is reserved), which is where the 1023-interfaces-per-bridge figure above comes from.
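You can confirm the constant if you have kernel source handy (the /usr/src/linux path is an assumption about where the source is unpacked):

```
grep -n "BR_PORT_BITS\|BR_MAX_PORTS" /usr/src/linux/net/bridge/br_private.h
# Expected (abridged):
#   #define BR_PORT_BITS  10
#   #define BR_MAX_PORTS  (1<<BR_PORT_BITS)
```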