What is the max number of attached volumes per Amazon EC2 instance?

I am running Ubuntu Server 12.04 LTS as the guest operating system.

How many volumes can I attach to an instance? I'm working on a project that will require each of our customers to have its own volume.

Amazon does not seem to offer dynamically resizable volumes, so to grow an existing volume we would need to create a new volume from a snapshot. That operation requires server downtime, which is unacceptable. This is why we need one volume per client. With a physical server, I would put in a 2 TB drive and use quotas, but we don't want to go that way for now.

asked Jun 14 '12 by user1457432

1 Answer

The accepted answer is wrong: there is a limit. I have direct, current experience with EC2 t3.medium, m5a.large, and c5.xlarge instances running Amazon Linux, and here is what I found:

  • there seems to be a hard limit of 26 volumes
  • the device names are /dev/sd[a-z], /dev/xvd[a-z], /dev/xvd[a-z][a-z]

The Amazon documentation indirectly says that the limit is (currently) 26 devices:

EBS volumes are exposed as NVMe block devices on Nitro-based instances. The device names are /dev/nvme0n1, /dev/nvme1n1, and so on. The device names that you specify in a block device mapping are renamed using NVMe device names (/dev/nvme[0-26]n1). The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping.

So while you can generate plenty of device names with /dev/xvd?? that will actually work, in any order, mixing and matching the combinations, e.g. /dev/sdf, /dev/xvdz, /dev/xvdxy, there is still a limit of 26 devices.
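For illustration, this is roughly how mixing those naming styles looks with the AWS CLI; a minimal sketch, where the volume and instance IDs are hypothetical placeholders and a configured AWS CLI is assumed:

    # attach three volumes using three different device-name styles
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf
    aws ec2 attach-volume --volume-id vol-0123456789abcdef1 \
        --instance-id i-0123456789abcdef0 --device /dev/xvdz
    aws ec2 attach-volume --volume-id vol-0123456789abcdef2 \
        --instance-id i-0123456789abcdef0 --device /dev/xvdxy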

What happens if you go beyond this limit? Two things:

  • If the instance is running, the volume you are trying to attach will remain stuck in "attaching" state.
  • If the instance is stopped, the volume attaches without problem, but when you try to start the instance, it will get stuck in "pending" state.
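You can watch this happen yourself with the AWS CLI; a minimal sketch, where the volume ID is a hypothetical placeholder:

    # past the 26-device limit, the attachment state reported here
    # stays "attaching" instead of moving to "attached"
    aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
        --query 'Volumes[0].Attachments[0].State' --output text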

Because of this behavior, I doubt that the issue lies with the OS (Linux, Windows, FreeBSD, whatever). If it were about the OS, the instance would enter the "running" state and then hang during boot, rather than getting stuck in "pending".

Also, you may want to list your /dev/ directory to see for yourself, but you do not have to worry about the Nitro device names /dev/nvme* or wonder how they map to the device names you specified in the attach-volume command: you will find both. In the example above, you will find the device names /dev/sdf, /dev/xvdz, and /dev/xvdxy as-is, alongside the /dev/nvme* nodes. You can use the device names you specified during attach-volume for things like mkfs, and I strongly recommend that you then use the UUID=... format to specify the volumes in your /etc/fstab, never mounting by /dev/ node name.
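As a minimal sketch of that workflow, assuming the volume was attached as /dev/sdf and using a hypothetical mount point /data/client1:

    # create a filesystem using the device name from attach-volume
    sudo mkfs -t ext4 /dev/sdf

    # print the filesystem's UUID for use in /etc/fstab
    sudo blkid /dev/sdf

    # then mount by UUID in /etc/fstab, never by /dev/ node name,
    # e.g. (the UUID below is a placeholder):
    # UUID=0aa2b3c4-1d2e-4f56-a7b8-9c0d1e2f3a4b  /data/client1  ext4  defaults,nofail  0 2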

answered Oct 27 '22 by Gunther Schadow