I have followed the steps for resizing an EC2 volume. The old volume was 5GB and the one I created is 100GB. Now, when I restart the instance and run df -h, I still see this:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvde1      4.7G  3.5G 1021M  78% /
tmpfs           296M     0  296M   0% /dev/shm
This is what I get when running sudo resize2fs /dev/xvde1:
The filesystem is already 1247037 blocks long. Nothing to do!
If I run cat /proc/partitions
I see
202  64  104857600  xvde
202  65    4988151  xvde1
202  66     249007  xvde2
From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.
How can I use the new volume, or umount xvde1 and mount xvde instead?
I cannot understand what I am doing wrong.
I also tried sudo xfs_growfs /dev/xvde1 and got:
xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem
Btw, this is a Linux box with CentOS 6.2 x86_64.
Thanks in advance for your help
First, go to your volume and choose "Modify Volume" under "Actions". You are then given the option to change both the disk size and the volume type. For example, you can switch to Provisioned IOPS SSD (io1), increase the size to 100GB, and set the IOPS to 5000 if your requirements call for it.
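If you prefer the command line, a roughly equivalent change can be made with the AWS CLI (a sketch; the volume ID below is a placeholder and the size/type/IOPS values just mirror the example above):

# grow the volume online and switch it to io1 with 5000 provisioned IOPS
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100 --volume-type io1 --iops 5000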
To avoid No space left on device errors when expanding the root partition or root file system on your EBS volume, use the temporary file system, tmpfs, that resides in memory. Mount the tmpfs file system under the /tmp mount point, and then expand your root partition or root file system.
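A minimal sketch of that workaround, assuming /tmp can be shadowed by tmpfs for the duration of the resize:

# temporarily back /tmp with memory so the resize tools have scratch space
sudo mount -t tmpfs -o size=10M tmpfs /tmp
# ... expand the partition / file system here ...
# remove the temporary mount when done
sudo umount /tmp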
There's no need to stop the instance and detach the EBS volume to resize it anymore!
On 13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"
The process works even if the volume to extend is the root volume of a running instance!
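For instance, the progress of such an online modification can be watched from the AWS CLI while the instance keeps running (a sketch with a placeholder volume ID):

# the modification state moves from "modifying" to "optimizing" to "completed"
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0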
lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0   8G  0 part /
As you can see, /dev/xvda1 is still an 8 GiB partition on a 16 GiB device and there are no other partitions on the volume. Let's use "growpart" to resize the 8G partition up to 16G:
# install "cloud-guest-utils" if it is not installed already apt install cloud-guest-utils # resize partition growpart /dev/xvda 1
Let's check the result (you can see /dev/xvda1 is now 16G):
lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0  16G  0 part /
Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when changing the boot drive.
# Check before resizing ("Avail" shows 1.1G):
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  6.3G  1.1G  86% /

# resize filesystem
resize2fs /dev/xvda1

# Check after resizing ("Avail" now shows 8.7G):
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       16G  6.3G  8.7G  42% /
So we have zero downtime and lots of new space to use.
Enjoy!
Update: use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
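A quick way to decide which tool applies is to check the filesystem type first (a sketch; xfs_growfs is typically pointed at the mount point):

# show the filesystem type of the root mount
df -T /
# ext2/3/4 -> resize2fs /dev/xvda1
# XFS      -> grow via the mount point:
sudo xfs_growfs /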
Thank you Wilman, your commands worked correctly. A small improvement needs to be considered if we are increasing EBS volumes to larger sizes:
Note the volume's current mount point (e.g. /dev/sda1), then access the instance via SSH and run fdisk /dev/xvde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u')
Hit p to show current partitions
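From here the usual fdisk route is to delete the partition and recreate it starting at the same first sector, which leaves the data in place (a sketch of the typical keystrokes, assuming a single root partition; verify the start sector against your own p output before writing):

d    # delete the existing partition (the data on it is not erased)
n    # create a new, larger partition
p    # primary
1    # partition number 1
     # accept the original first sector, then the default last sector
w    # write the new table and exit fdisk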
Run partprobe (from the parted package) to tell the kernel about the new partition table.