"no space left on device" even after removing all containers

While experimenting with Docker and Docker Compose I suddenly ran into "no space left on device" errors. I've tried to remove everything using methods suggested in similar questions, but to no avail.

Things I ran:

$ docker-compose rm -v

$ docker volume rm $(docker volume ls -qf dangling=true)

$ docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

$ docker system prune

$ docker container prune

$ docker rm $(docker stop -t=1 $(docker ps -q))

$ docker rmi -f $(docker images -q)

As far as I'm aware, there really shouldn't be anything left now. And it looks that way:

$ docker images    
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

Same for volumes:

$ docker volume ls
DRIVER              VOLUME NAME

And for containers:

$ docker container ls   
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Unfortunately, I still get errors like this one:

$ docker-compose up
Pulling adminer (adminer:latest)...
latest: Pulling from library/adminer
90f4dba627d6: Pulling fs layer
19ae35d04742: Pulling fs layer
6d34c9ec1436: Download complete
729ea35b870d: Waiting
bb4802913059: Waiting
51f40f34172f: Waiting
8c152ed10b66: Waiting
8578cddcaa07: Waiting
e68a921e4706: Waiting
c88c5cb37765: Waiting
7e3078f18512: Waiting
42c465c756f0: Waiting
0236c7f70fcb: Waiting
6c063322fbb8: Waiting
ERROR: open /var/lib/docker/tmp/GetImageBlob865563210: no space left on device

Some data about my Docker installation:

$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.06.1-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 15
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.10.0-32-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.685GiB
Name: engelbert
ID: UO4E:FFNC:2V25:PNAA:S23T:7WBT:XLY7:O3KU:VBNV:WBSB:G4RS:SNBH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

And my disk info:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3,9G     0  3,9G   0% /dev
tmpfs           787M   10M  778M   2% /run
/dev/nvme0n1p3   33G   25G  6,3G  80% /
tmpfs           3,9G   46M  3,8G   2% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/loop0       81M   81M     0 100% /snap/core/2462
/dev/loop1       80M   80M     0 100% /snap/core/2312
/dev/nvme0n1p1  596M   51M  546M   9% /boot/efi
/dev/nvme0n1p5  184G   52G  123G  30% /home
tmpfs           787M   12K  787M   1% /run/user/121
tmpfs           787M   24K  787M   1% /run/user/1000

And:

$ df -hi /var/lib/docker
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p3   2,1M  2,0M   68K   97% /

As mentioned, I'm still experimenting, so I'm not sure whether I've posted all the relevant info - let me know if you need more.

Does anyone have an idea what else could be causing this?

Asked by Vincent on Aug 22 '17


2 Answers

The problem is that /var/lib/docker is on the / filesystem, which is running out of inodes. You can check this by running df -i /var/lib/docker.
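
To see where the inodes are actually going, GNU du can count them per directory; a quick sketch (the --inodes flag needs coreutils 8.22 or newer, which Ubuntu 16.04 ships):

$ # count inodes per top-level directory under Docker's root, largest last
$ sudo du --inodes -d 1 /var/lib/docker | sort -n | tail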

Since /home's filesystem has sufficient inodes and disk space, moving Docker's working directory there should get it going again.

(Note that this assumes there is nothing valuable in the current Docker install.)

First stop the Docker daemon. On Ubuntu, run

sudo service docker stop

Then move the old /var/lib/docker out of the way:

sudo mv /var/lib/docker /var/lib/docker~

Now create a directory on /home:

sudo mkdir /home/docker

and set the required permissions:

sudo chmod 0711 /home/docker

Symlink /var/lib/docker to the new directory:

sudo ln -s /home/docker /var/lib/docker

Then restart the Docker daemon:

sudo service docker start

Docker should now work again.
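
As an alternative to the symlink, Docker 17.05 and later (which covers the 17.06.1-ce shown in the question) let you move the storage location through the daemon configuration instead. A sketch reusing the /home/docker directory from the steps above: create /etc/docker/daemon.json containing

{
    "data-root": "/home/docker"
}

and then start the daemon with sudo service docker start; it will use /home/docker directly, with no symlink needed.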

Answered by Rob Blake on Oct 18 '22


For future reference: once you've removed all the containers, you can also run docker system prune, which removes stopped containers, unused networks, and dangling images in one go.
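
To check what is actually using the space before and after pruning, docker system df prints a breakdown by images, containers, and local volumes, and the prune can be widened with flags - for example (both flags delete more than just dangling data, so use them with care; the --volumes flag needs Docker 17.06.1 or newer):

$ docker system df                    # disk usage by images, containers, and local volumes
$ docker system prune -a --volumes    # also remove all unused images and unused volumes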

Answered by Tarang on Oct 18 '22