To limit the maximum amount of memory a container can use, add the --memory option (or its shortcut, -m) to the docker run command and specify how much memory you want to dedicate to that container.
The --memory parameter limits the container's memory usage; if the container tries to use more than that limit, Docker (through the kernel's OOM killer) will kill it.
By default, Docker applies no memory limit to individual containers, so a container can consume all available memory on the host.
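For example (the image and value here are only placeholders), the long and short forms are equivalent:
$ docker run --memory=256m ubuntu /bin/bash   # cap the container at 256 MB
$ docker run -m 256m ubuntu /bin/bash         # same limit, short form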
Note that this is separate from storage: current Docker versions also apply a default limit of 10 GB to each container's storage.
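That storage limit comes from the devicemapper storage driver's base device size, not from cgroups; if you need larger containers, the usual way to raise it is a daemon storage option (a sketch, assuming the devicemapper driver is actually in use):
# Raise the per-container base filesystem size (devicemapper driver only)
$ dockerd --storage-opt dm.basesize=20G   # older releases start the daemon as "docker daemon" or "docker -d"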
Running free inside the container won't show the memory limit, since it is enforced via cgroups. Instead, on the host (outside the container) you can check it through /sys/fs/cgroup and the memory cgroup:
vagrant@precise64:~$ docker run -m=524288 -d -t busybox sleep 3600
f03a017b174f
vagrant@precise64:~$ cat /sys/fs/cgroup/memory/lxc/f03a017b174ff1022e0f46bc1b307658c2d96ffef1dd97e7c1929a4ca61ab80f/memory.limit_in_bytes
524288
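If you prefer not to poke around in the cgroup filesystem, reasonably recent Docker versions also expose the configured limit through docker inspect:
$ docker inspect -f '{{.HostConfig.Memory}}' f03a017b174f   # prints the limit in bytes, 524288 for the container above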
To see it run out of memory, you can run something that will use more memory than you allocate - for example:
vagrant@precise64:~$ docker run -m=524288 -d -p 8000:8000 -t ubuntu:12.10 /usr/bin/python3 -m http.server
0f742445f839
vagrant@precise64:~$ docker ps | grep 0f742445f839
vagrant@precise64:~$ docker ps -a | grep 0f742445f839
0f742445f839 ubuntu:12.10 /usr/bin/python3 -m 16 seconds ago Exit 137 blue_pig
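Exit code 137 is 128 + 9, i.e. the process was killed with SIGKILL by the kernel's OOM killer. On newer Docker versions you can confirm this without digging through dmesg (the State fields below exist in current releases):
$ docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' 0f742445f839   # prints "true 137" for an OOM-killed container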
In dmesg you should see the container and process killed:
[ 583.447974] Pid: 1954, comm: python3 Tainted: GF O 3.8.0-33-generic #48~precise1-Ubuntu
[ 583.447980] Call Trace:
[ 583.447998] [<ffffffff816df13a>] dump_header+0x83/0xbb
[ 583.448108] [<ffffffff816df1c7>] oom_kill_process.part.6+0x55/0x2cf
[ 583.448124] [<ffffffff81067265>] ? has_ns_capability_noaudit+0x15/0x20
[ 583.448137] [<ffffffff81191cc1>] ? mem_cgroup_iter+0x1b1/0x200
[ 583.448150] [<ffffffff8113893d>] oom_kill_process+0x4d/0x50
[ 583.448171] [<ffffffff816e1cf5>] mem_cgroup_out_of_memory+0x1f6/0x241
[ 583.448187] [<ffffffff816e1e7f>] mem_cgroup_handle_oom+0x13f/0x24a
[ 583.448200] [<ffffffff8119000d>] ? mem_cgroup_margin+0xad/0xb0
[ 583.448212] [<ffffffff811949d0>] ? mem_cgroup_charge_common+0xa0/0xa0
[ 583.448224] [<ffffffff81193ff3>] mem_cgroup_do_charge+0x143/0x170
[ 583.448236] [<ffffffff81194125>] __mem_cgroup_try_charge+0x105/0x350
[ 583.448249] [<ffffffff81194987>] mem_cgroup_charge_common+0x57/0xa0
[ 583.448261] [<ffffffff8119517a>] mem_cgroup_newpage_charge+0x2a/0x30
[ 583.448275] [<ffffffff8115b4d3>] do_anonymous_page.isra.35+0xa3/0x2f0
[ 583.448288] [<ffffffff8115f759>] handle_pte_fault+0x209/0x230
[ 583.448301] [<ffffffff81160bb0>] handle_mm_fault+0x2a0/0x3e0
[ 583.448320] [<ffffffff816f844f>] __do_page_fault+0x1af/0x560
[ 583.448341] [<ffffffffa02b0a80>] ? vfsub_read_u+0x30/0x40 [aufs]
[ 583.448358] [<ffffffffa02ba3a7>] ? aufs_read+0x107/0x140 [aufs]
[ 583.448371] [<ffffffff8119bb50>] ? vfs_read+0xb0/0x180
[ 583.448384] [<ffffffff816f880e>] do_page_fault+0xe/0x10
[ 583.448396] [<ffffffff816f4bd8>] page_fault+0x28/0x30
[ 583.448405] Task in /lxc/0f742445f8397ee7928c56bcd5c05ac29dcc6747c6d1c3bdda80d8e688fae949 killed as a result of limit of /lxc/0f742445f8397ee7928c56bcd5c05ac29dcc6747c6d1c3bdda80d8e688fae949
[ 583.448412] memory: usage 416kB, limit 512kB, failcnt 342
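The failcnt value in that last line is also available directly from the cgroup, along with the peak usage, if you want to check without dmesg:
$ cat /sys/fs/cgroup/memory/lxc/<container_id>/memory.failcnt            # times the limit was hit
$ cat /sys/fs/cgroup/memory/lxc/<container_id>/memory.max_usage_in_bytes # peak usage, in bytes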
I am linking to this nice post on stressing container memory usage. Here's the summary, modified a bit to work for Docker instead of generic LXC:
Launch a container with a memory limit:
$ sudo docker run -m 512M -it ubuntu /bin/bash
root# apt-get update && apt-get install -y build-essential
Create a file, foo.c, inside the container with the following:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>  /* for sleep() */

int main(void) {
    int i;
    for (i = 0; i < 65536; i++) {
        char *q = malloc(65536);               /* grab 64 KB per iteration */
        printf("Malloced: %ld\n", (long)65536 * i);
    }
    sleep(9999999);  /* keep the process alive so usage can be watched */
    return 0;
}
Compile the file:
gcc -o foo foo.c
Open a new terminal to monitor the container memory usage:
$ cd /sys/fs/cgroup/memory/lxc/{{containerID}}
$ while true; do echo -n "Mem Usage (mb): " && expr `cat memory.usage_in_bytes` / 1024 / 1024; echo -n "Mem+swap Usage (mb): " && expr `cat memory.memsw.usage_in_bytes` / 1024 / 1024; sleep 1; done
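If your Docker version uses libcontainer rather than the LXC driver, the same files live under a docker directory instead of lxc (see also the note further down):
$ cd /sys/fs/cgroup/memory/docker/{{containerID}}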
Start the memory consumption in the container:
$ ./foo
Now watch your container max out. Note: when you're out of memory, malloc calls start to fail, but otherwise the container is left alone. Normally the software inside the container will crash because of the failing allocations, but resilient software will continue to operate.
Final note: Docker's -m flag does not count swap and RAM separately. If you use -m 512M, then some of that 512 MB will be swap, not RAM. If you want only RAM, you will need to use the LXC options directly (which means you will need to run Docker with the LXC execution driver instead of libcontainer):
# Same as docker -m 512m
sudo docker run --lxc-conf="lxc.cgroup.memory.limit_in_bytes=512M" -it ubuntu /bin/bash
# Set total to equal maximum RAM (for example, don't use swap)
sudo docker run --lxc-conf="lxc.cgroup.memory.max_usage_in_bytes=512M" --lxc-conf="lxc.cgroup.memory.limit_in_bytes=512M" -it ubuntu /bin/bash
There is a notable difference between counting swap as part of the total and not: with swap, the foo program above reaches ~450 MB quickly and then slowly consumes the remainder, whereas with RAM only it jumps to 511 MB immediately for me. With swap, the container's memory consumption is reported at ~60 MB as soon as I enter the container - this is essentially the swap being counted as "usage". Without swap, my memory usage is less than 10 MB when I enter the container.
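On newer Docker versions you no longer need the LXC driver for this: the --memory-swap flag sets the combined RAM+swap budget, so setting it equal to --memory disables swap for the container (values below are just examples):
# RAM capped at 512 MB and no extra swap (RAM + swap total = 512 MB)
$ docker run -m 512m --memory-swap 512m -it ubuntu /bin/bash
# RAM capped at 512 MB plus up to 512 MB of swap on top
$ docker run -m 512m --memory-swap 1g -it ubuntu /bin/bash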
Run docker stats to see the memory limits you specified applied to your containers.
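For a one-off snapshot instead of the live stream, pass --no-stream; the MEM USAGE / LIMIT column should show the limit you set with -m/--memory:
$ docker stats --no-stream <container_id>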
If you are using a newer version of Docker, then the place to look for this information is /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes:
docker run --memory="198m" redis
docker ps --no-trunc   # to get the container long_id
313105b341eed869bcc355c4b3903b2ede2606a8f1b7154e64f913113db8b44a
cat /sys/fs/cgroup/memory/docker/313105b341eed869bcc355c4b3903b2ede2606a8f1b7154e64f913113db8b44a/memory.limit_in_bytes
207618048 # in bytes
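On hosts that have moved to cgroup v2 (most recent distributions), that path no longer exists; with the systemd cgroup driver the limit typically lives in memory.max under the container's scope instead (the exact path can vary with your cgroup driver):
# cgroup v2 equivalent, assuming the systemd cgroup driver
$ cat /sys/fs/cgroup/system.slice/docker-<container_long_id>.scope/memory.max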