
Limit disk size and bandwidth of a Docker container

Tags:

docker

I have a physical host machine with Ubuntu 14.04 running on it. It has 100G disk and 100M network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G disk and 10M network bandwidth.

After going through the official documents and searching on the Internet, I still can't find a way to allocate a specific disk size and network bandwidth to a container.

I think this may not be possible in Docker directly, and maybe we need to bypass Docker. Does this mean we should use something "underlying", such as LXC or cgroups? Can anyone give some suggestions?


Edit:

@Mbarthelemy, your suggestion seems to work but I still have some questions about disk:

1) Is it possible to allocate a different size (such as 20G, 30G, etc.) to each container? You said it is hardcoded in Docker, so it seems impossible.

2) I use the commands below to start the Docker daemon and the container:

docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash

Then I use df -h to view the disk usage; it gives the following output:

Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/docker-longid    9.8G  276M  9.0G   3% /
/dev/mapper/Chris--vg-root    27G  5.5G   20G  22% /etc/hosts

From the above, I think the maximum disk a container can use is still larger than 10G. What do you think?

Asked Jun 24 '14 by Chris.Huang


People also ask

Do Docker containers have a size limit?

In the current Docker version, there is a default limit of 10 GB on Docker container storage.

How do I limit the memory of a Docker container?

To limit the maximum amount of memory usage for a container, add the --memory option to the docker run command. Alternatively, you can use the shortcut -m. Within the command, specify how much memory you want to dedicate to that specific container.
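For example, a minimal sketch (the 256m value is just a placeholder, and the training/webapp image is taken from the question):

docker run -i -t --memory=256m training/webapp /bin/bash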


2 Answers

I don't think this is possible right now using Docker default settings. Here's what I would try.

  • About disk usage: You could tell Docker to use the devicemapper storage backend instead of AUFS. This way each container would run on a block device (devicemapper dm-thin target) limited to 10 GB (this is a Docker default; luckily, it matches your requirement!).

    According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can change the default container rootfs size with --storage-opt dm.basesize=20G (that would be applied to any newly created container).

    To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.

  • About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.

    Then, create your containers with a custom LXC directive to put them into a traffic class:

    docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff

    Check the official documentation about how to apply bandwidth limits to this class. I've never tried this myself (my setup uses a custom Open vSwitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class; a rough combined sketch follows this list.
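To make both bullets concrete, here is a rough sketch of what the host setup could look like with the old LXC exec driver (the docker0 interface, the 10mbit rate and the training/webapp image are illustrative assumptions; the classid 0x00100001 maps to tc handle 10:, class 10:1, and exact flags depend on your Docker version):

# daemon: devicemapper storage (20G per container) plus the LXC exec driver
docker -d --storage-driver=devicemapper --storage-opt dm.basesize=20G -e lxc

# container: tag its traffic with the net_cls classid 0x00100001 (= tc class 10:1)
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" -i -t training/webapp /bin/bash

# host: shape traffic for that class on the relevant interface (docker0 is an assumption)
tc qdisc add dev docker0 root handle 10: htb
tc class add dev docker0 parent 10: classid 10:1 htb rate 10mbit
tc filter add dev docker0 parent 10: protocol ip prio 10 handle 1: cgroup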

Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...

Answered Sep 21 '22 by mbarthelemy


Newer Docker releases have --device-read-bps and --device-write-bps.

You can use:

docker run --device-read-bps=/dev/sda:10mb 

More info here:

https://blog.docker.com/2016/02/docker-1-10/
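A minimal sketch combining both flags (the training/webapp image and the 10mb values are placeholders; /dev/sda must be the block device that actually backs the container's storage):

docker run -i -t --device-read-bps=/dev/sda:10mb --device-write-bps=/dev/sda:10mb training/webapp /bin/bash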

Answered Sep 21 '22 by moylop260