 

How to limit Docker filesystem space available to container(s)

The general scenario is that we have a cluster of servers and we want to set up virtual clusters on top of that using Docker.

For that we have created Dockerfiles for different services (Hadoop, Spark etc.).

Regarding the Hadoop HDFS service, however, we have the situation that the disk space available to the Docker containers equals the disk space available on the server. We want to limit the disk space available on a per-container basis so that we can dynamically spawn an additional datanode with a given amount of storage to contribute to the HDFS filesystem.

We had the idea of using loopback files formatted with ext4 and mounting them on directories which we use as volumes in Docker containers. However, this implies a large performance loss.
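For reference, a minimal sketch of that loopback approach (file paths, sizes, and the image/volume names are only illustrative):

# create a sparse 20 GB file and format it with ext4 (-F because it is a regular file, not a block device)
$ truncate -s 20G /var/lib/hdfs-vol1.img
$ mkfs.ext4 -F /var/lib/hdfs-vol1.img
# mount it via a loop device and pass the mount point into the container as a volume
$ sudo mkdir -p /mnt/hdfs-vol1
$ sudo mount -o loop /var/lib/hdfs-vol1.img /mnt/hdfs-vol1
$ docker run -d -v /mnt/hdfs-vol1:/hadoop/dfs/data my-datanode-image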

I found another question on SO (Limit disk size and bandwidth of a Docker container), but the answers are almost 1.5 years old, which, given the speed of Docker development, is ancient.

Which approach or storage backend would allow us to:

  • limit storage on a per-container basis,
  • get near bare-metal performance,
  • avoid repartitioning of the server drives?
asked Oct 08 '15 by Björn Jacobs

People also ask

How do I limit the memory of a docker container?

By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows. Docker provides ways to control how much memory or CPU a container can use by setting runtime configuration flags on the docker run command.

How do you restrict the memory utilization of a container?

To limit the maximum amount of memory a container can use, add the --memory option to the docker run command. Alternatively, you can use the shortcut -m. Within the command, specify how much memory you want to dedicate to that specific container.
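For example (the image name and the values are only illustrative):

# cap the container at 512 MB of RAM; -m is shorthand for --memory
$ docker run -d -m 512m nginx
# the long form works the same way, here combined with a CPU limit
$ docker run -d --memory=1g --cpus=1.5 nginx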

Do docker containers have a size limit?

When using the devicemapper storage driver, there is a default limit of 10 GB on each container's filesystem (the base device size).
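If that devicemapper default is the limit you are hitting, it can be raised at the daemon level; a sketch, assuming you start dockerd manually (the value is illustrative):

# raise the base size used for new containers under the devicemapper driver
$ dockerd --storage-driver=devicemapper --storage-opt dm.basesize=20G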


1 Answer

You can specify runtime constraints on memory and CPU, but not disk space.

The ability to set constraints on disk space has been requested (issue 12462, issue 3804), but isn't yet implemented, as it depends on the underlying filesystem driver.

This feature is going to be added at some point, but not right away. It's a bit more difficult to add right now because a lot of code is being moved from one place to another. After this work is done, it should be much easier to implement this functionality.

Please keep in mind that quota support can't be added as a hack to devicemapper; it has to be implemented in a way that makes it easy to add quota support for as many storage backends as possible.


Update August 2016: as shown below, and as noted in an issue 3804 comment, PR 24771 and PR 24807 have been merged since then. docker run now allows setting storage driver options per container:

$ docker run -it --storage-opt size=120G fedora /bin/bash 

This size option sets the container's rootfs size to 120G at creation time.
This option is only available for the devicemapper, btrfs, overlay2, windowsfilter, and zfs graph drivers.

Documentation: docker run/#Set storage driver options per container.
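A quick way to check which driver (and backing filesystem) your daemon uses before relying on this; the grep pattern is just a convenience and may need adjusting to your docker info output:

$ docker info | grep -i -E 'storage driver|backing filesystem'
# then, if the driver supports it, cap this container's rootfs at creation time (image name illustrative)
$ docker run -it --storage-opt size=20G fedora /bin/bash

Note that for overlay2 the size option generally only works when the backing filesystem is xfs mounted with project quotas (pquota) enabled.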

answered Sep 24 '22 by VonC