
Unable to growpart because no space left

I'm running an AWS EC2 Ubuntu instance with EBS storage initially of 8GB.

This is now 99.8% full, so I've followed AWS documentation instructions to increase the EBS volume to 16GB. I now need to extend my partition /dev/xvda1 to 16GB, but when I run the command

$ growpart /dev/xvda 1 

I get the error

mkdir: cannot create directory ‘/tmp/growpart.2626’: No space left on device 

I have tried

  1. rebooting the instance
  2. stopping the instance, and mounting a newly created EBS volume of size 16GB based on a snapshot of the old 8GB volume
  3. running docker system prune -a (which fails with "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"; trying to start the daemon with sudo dockerd also fails with a "no space left on device" error)
  4. running resize2fs /dev/xvda1

all to no avail.

Running lsblk returns

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0   89M  1 loop /snap/core/7713
loop1     7:1    0   18M  1 loop /snap/amazon-ssm-agent/1480
loop2     7:2    0 89.1M  1 loop /snap/core/7917
loop3     7:3    0   18M  1 loop /snap/amazon-ssm-agent/1455
xvda    202:0    0   16G  0 disk
└─xvda1 202:1    0    8G  0 part /

df -h returns

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M   16M  379M   4% /run
/dev/xvda1      7.7G  7.7G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       90M   90M     0 100% /snap/core/7713
/dev/loop1       18M   18M     0 100% /snap/amazon-ssm-agent/1480
/dev/loop2       90M   90M     0 100% /snap/core/7917
/dev/loop3       18M   18M     0 100% /snap/amazon-ssm-agent/1455
tmpfs           395M     0  395M   0% /run/user/1000

and df -i returns

Filesystem      Inodes  IUsed  IFree IUse% Mounted on
udev            501743    296 501447    1% /dev
tmpfs           504775    457 504318    1% /run
/dev/xvda1     1024000 421259 602741   42% /
tmpfs           504775      1 504774    1% /dev/shm
tmpfs           504775      3 504772    1% /run/lock
tmpfs           504775     18 504757    1% /sys/fs/cgroup
/dev/loop0       12827  12827      0  100% /snap/core/7713
/dev/loop1          15     15      0  100% /snap/amazon-ssm-agent/1480
/dev/loop2       12829  12829      0  100% /snap/core/7917
/dev/loop3          15     15      0  100% /snap/amazon-ssm-agent/1455
tmpfs           504775     10 504765    1% /run/user/1000
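These outputs show the root file system is out of data blocks (df -h reports 100% Use%) but not out of inodes (df -i reports only 42% IUse%). With GNU coreutils df, both figures can be checked side by side in one command:

```shell
# Show block usage (pcent) and inode usage (ipcent) for the root file system.
# --output is a GNU coreutils extension to df.
df --output=source,pcent,ipcent /
```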
asked Dec 20 '19 by llamarama


People also ask

How do I increase the size of my EBS volume if I receive an error that there's no space left on my file system?

To avoid No space left on device errors when expanding the root partition or root file system on your EBS volume, use the temporary file system, tmpfs, that resides in memory. Mount the tmpfs file system under the /tmp mount point, and then expand your root partition or root file system.
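As a minimal sketch of that workaround (mount options taken from the AWS knowledge-center article linked in the answer below):

```shell
# Mount a small in-memory tmpfs over /tmp so growpart has scratch space,
# then unmount it once the partition has been grown.
sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
# ...run growpart and resize2fs here...
sudo umount /tmp
```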


2 Answers

For anyone who has this problem, here's a link to the answer: https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/

Summary

  1. Run df -h to verify that your root file system is full (Use% of 100%)
  2. Run lsblk, then lsblk -f, to get block device details
  3. sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp (mounts a small in-memory tmpfs over /tmp so growpart has scratch space)
  4. sudo growpart /dev/DEVICE_ID PARTITION_NUMBER (e.g. sudo growpart /dev/xvda 1)
  5. Run lsblk to verify the partition has expanded
  6. sudo resize2fs /dev/DEVICE_IDPARTITION_NUMBER (e.g. sudo resize2fs /dev/xvda1 — note there is no space before the partition number here)
  7. Run df -h to verify the resized file system
  8. sudo umount /tmp
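Applied to the setup in the question (disk /dev/xvda, partition 1, ext4 root — assumptions read off the lsblk output above), the whole sequence might look like:

```shell
# All commands need root and assume an ext4 root file system on /dev/xvda1.
sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp  # scratch space for growpart
sudo growpart /dev/xvda 1        # grow partition 1 to fill the 16G disk
lsblk                            # xvda1 should now show 16G
sudo resize2fs /dev/xvda1        # grow the ext4 file system to fill the partition
df -h /                          # Use% should drop well below 100%
sudo umount /tmp                 # remove the temporary tmpfs
```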
answered Oct 17 '22 by DeliciousElephant8


I came across this article http://www.daniloaz.com/en/partitioning-and-resizing-the-ebs-root-volume-of-an-aws-ec2-instance/ and solved the problem with ideas from there.

Steps taken:

  1. Note down the root device name (e.g. /dev/sda1)
  2. Stop the instance
  3. Detach the root EBS volume, and modify its size if you haven't already
  4. Create an auxiliary instance (e.g. a t2.micro, or use an existing one if you wish)
  5. Attach the volume from step 3 to the auxiliary instance (any available device name will do)
  6. On the auxiliary instance, run lsblk to confirm the volume is attached correctly
  7. sudo growpart /dev/xvdf 1 (or similar, to expand the partition)
  8. Run lsblk to check that the partition has grown
  9. Detach the volume
  10. Attach the volume back to your original instance, using the device name you noted in step 1
  11. Start the instance and SSH into it
  12. If the login banner still reports "Usage of /: 99.8% of X.XX GB", run df -h to check the size of your root partition (e.g. /dev/xvda1)
  13. Run sudo resize2fs /dev/xvda1 (or similar) to grow the file system to fill the partition
  14. Run df -h to check that the Use% of /dev/xvda1 is no longer ~100%
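For reference, the EBS-level steps above (stop, detach, modify, attach, start) can also be done with the AWS CLI instead of the console. A hedged sketch — the instance and volume IDs are placeholders, and it assumes the AWS CLI is installed with credentials configured:

```shell
# Placeholder IDs — substitute your own.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16   # grow to 16 GiB
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0aux0000000000000 --device /dev/xvdf            # auxiliary instance
# ...grow the partition on the auxiliary instance, then detach and re-attach:
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sda1            # original root device
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```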
answered Oct 17 '22 by llamarama