
EC2 Can't resize volume after increasing size

I have followed the steps for resizing an EC2 volume:

  1. Stopped the instance
  2. Took a snapshot of the current volume
  3. Created a new volume out of the previous snapshot with a bigger size in the same region
  4. Detached the old volume from the instance
  5. Attached the new volume to the instance at the same mount point

The old volume was 5GB and the one I created is 100GB. Now, when I restart the instance and run df -h, I still see this:

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvde1            4.7G  3.5G 1021M  78% /
    tmpfs                 296M     0  296M   0% /dev/shm

This is what I get when running:

    sudo resize2fs /dev/xvde1
    The filesystem is already 1247037 blocks long.  Nothing to do!

If I run cat /proc/partitions I see:

    202       64  104857600 xvde
    202       65    4988151 xvde1
    202       66     249007 xvde2

From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.

How can I use the new volume or umount xvde1 and mount xvde instead?

I cannot understand what I am doing wrong

I also tried sudo xfs_growfs /dev/xvde1, which gives:

    xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem

Btw, this is a Linux box with CentOS 6.2 x86_64.

Thanks in advance for your help

asked Jun 13 '12 by Wilman Arambillete


People also ask

How do I resize AWS volumes?

First, go to your volume and choose "Modify Volume" under "Actions." You are then given the option to change both the disk size and the volume type. You can also switch to Provisioned IOPS SSD (io1), increase the size to 100GB, and set the IOPS to 5000 if your workload requires it.
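If you prefer scripting it, the same modification can be made with the AWS CLI; a minimal sketch, assuming a configured CLI and a placeholder volume ID:

    # Placeholder volume ID -- substitute your own
    aws ec2 modify-volume \
        --volume-id vol-0123456789abcdef0 \
        --volume-type io1 \
        --iops 5000 \
        --size 100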

How do I increase the size of my EBS volume if I receive an error that there's no space left on my file system?

To avoid No space left on device errors when expanding the root partition or root file system on your EBS volume, use the temporary file system, tmpfs, that resides in memory. Mount the tmpfs file system under the /tmp mount point, and then expand your root partition or root file system.
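A minimal sketch of that workaround (the tmpfs size is an assumption; pick what your instance's free memory allows):

    # Mount an in-memory tmpfs over /tmp to get temporary working space
    sudo mount -t tmpfs -o size=10M tmpfs /tmp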


2 Answers

There's no need to stop the instance and detach the EBS volume to resize it anymore!

On 13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything".

The process works even if the volume to extend is the root volume of a running instance!


Say we want to increase the boot drive of an Ubuntu instance from 8G to 16G "on the fly".

step-1) Log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button.

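If you want to watch the modification progress from the command line instead of the console, a sketch using the AWS CLI (placeholder volume ID):

    # The modification state goes "modifying" -> "optimizing" -> "completed"
    aws ec2 describe-volumes-modifications \
        --volume-ids vol-0123456789abcdef0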


step-2) ssh into the instance and resize the partition:

Let's list the block devices attached to our box:

    lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  16G  0 disk
    └─xvda1 202:1    0   8G  0 part /

As you can see, /dev/xvda1 is still an 8 GiB partition on a 16 GiB device and there are no other partitions on the volume. Let's use "growpart" to resize the 8G partition up to 16G:

# install "cloud-guest-utils" if it is not installed already apt install cloud-guest-utils  # resize partition growpart /dev/xvda 1 

Let's check the result (you can see /dev/xvda1 is now 16G):

    lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  16G  0 disk
    └─xvda1 202:1    0  16G  0 part /

Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when changing the boot drive.


step-3) Resize the file system to grow all the way to fully use the new partition space:
    # Check before resizing ("Avail" shows 1.1G):
    df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      7.8G  6.3G  1.1G  86% /

    # resize filesystem
    resize2fs /dev/xvda1

    # Check after resizing ("Avail" now shows 8.7G!):
    df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1       16G  6.3G  8.7G  42% /

So we have zero downtime and lots of new space to use.
Enjoy!

Update: use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
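If you are not sure which filesystem you have, df -hT prints a Type column, which tells you whether resize2fs (ext2/3/4) or xfs_growfs applies; the output below is illustrative:

    df -hT /
    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/xvda1     ext4   16G  6.3G  8.7G  42% /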

answered Sep 28 '22 by Dmitry Shevkoplyas


Thank you Wilman, your commands worked correctly; a small improvement needs to be considered if we are increasing EBS volumes to larger sizes:

  1. Stop the instance
  2. Create a snapshot from the volume
  3. Create a new volume based on the snapshot increasing the size
  4. Check and remember the current volume's mount point (i.e. /dev/sda1)
  5. Detach current volume
  6. Attach the recently created volume to the instance, setting the exact mount point
  7. Restart the instance
  8. Access the instance via SSH and run fdisk /dev/xvde

    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u')

  9. Hit p to show current partitions

  10. Hit d to delete current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
  11. Hit n to create a new partition
  12. Hit p to set it as primary
  13. Hit 1 to set the first cylinder
  14. Set the desired new space (if empty the whole space is reserved)
  15. Hit a to make it bootable
  16. Hit 1 to select the partition, then w to write the changes
  17. Reboot instance OR use partprobe (from the parted package) to tell the kernel about the new partition table
  18. Log via SSH and run resize2fs /dev/xvde1
  19. Finally, check the new space by running df -h (the whole sequence is recapped as a shell session below)
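For reference, a condensed sketch of steps 8-19 as a shell session, assuming the same /dev/xvde device (fdisk is interactive; the letters in the comment are the keys to press at its prompts):

    sudo fdisk /dev/xvde        # p, d, n, p, 1, <Enter>, <Enter>, a, 1, w
    sudo partprobe /dev/xvde    # re-read the partition table without a reboot
    sudo resize2fs /dev/xvde1   # grow the filesystem into the enlarged partition
    df -h                       # verify the new space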
answered Sep 28 '22 by dcf