I am running Ubuntu Server 12.04 LTS as the guest operating system.
How many volumes can I attach to an instance? I'm working on a project which will require that each of our customers has their own volume.
Amazon seems not to have dynamic volumes, so to grow an existing volume we need to create a new one from a snapshot. That operation requires server downtime, which is unacceptable. This is why we need one volume per client. With a physical server, I'd put in a 2TB drive and use quotas, but we don't want to go that way for now.
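For reference, the grow-by-snapshot workflow described above looks roughly like this with the AWS CLI. This is only a sketch; all IDs, the size, and the availability zone are placeholders:

    # Snapshot the existing volume, then create a larger volume from it.
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
    aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0
    aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
        --size 200 --availability-zone us-east-1a
    # Swapping the old volume for the new one is the downtime window:
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf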
Metal instances support a maximum of 16 EBS volumes. High-memory virtualized instances support a maximum of 27 EBS volumes.
Cloud Volumes ONTAP uses EBS volumes as disks, with a maximum disk size of 16 TiB. Its disk and tiering limits differ by EC2 instance type, because many EC2 instance types have different disk limits.
With EBS Multi-Attach, you can attach a single io1 or io2 EBS volume to up to 16 Nitro-based instances.
You can attach a volume to an instance using one of the following methods.

Using the Amazon EC2 console:
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Volumes.
3. Select the volume to attach and choose Actions, Attach volume.
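Alternatively, you can use the AWS CLI's attach-volume command. The volume ID, instance ID, and device name below are placeholders:

    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf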
The accepted answer is wrong; there is a limit. I have direct, current experience with EC2 t3.medium, m5a.large, and c5.xlarge instances running Amazon Linux. Here is what I found:
The Amazon documentation indirectly says that the limit is (currently) 26 devices:
EBS volumes are exposed as NVMe block devices on Nitro-based instances. The device names are /dev/nvme0n1, /dev/nvme1n1, and so on. The device names that you specify in a block device mapping are renamed using NVMe device names (/dev/nvme[0-26]n1). The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping.
So, while you can generate plenty of device names with /dev/xvd?? that will actually work (they don't have to be in any order, and you can mix and match combinations, e.g., /dev/sdf, /dev/xvdz, /dev/xvdxy), there is still a limit of 26 devices.
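As a sketch of what I mean (the IDs and device names are placeholders, and the AWS CLI is assumed), you can attach volumes under completely arbitrary, unordered names, and the calls will succeed as long as you stay within the limit:

    INSTANCE_ID=i-0123456789abcdef0
    for dev in /dev/sdf /dev/xvdz /dev/xvdxy; do
        # Create a small volume and wait until it is available to attach.
        vol=$(aws ec2 create-volume --size 8 --availability-zone us-east-1a \
              --query VolumeId --output text)
        aws ec2 wait volume-available --volume-ids "$vol"
        aws ec2 attach-volume --volume-id "$vol" \
            --instance-id "$INSTANCE_ID" --device "$dev"
    done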
What happens if you go beyond this limit? In my testing, the instance never reaches the "running" state; it gets stuck in "pending". Because of this behavior, I doubt that the issue is about the OS (Linux, Windows, FreeBSD, whatever). If it were about the OS, the instance would enter the "running" state and then hang during boot, but it wouldn't get stuck in "pending".
Also, you may want to list your /dev/ directory to see for yourself, but you do not have to worry about the Nitro device names /dev/nvme* or wonder how they map to the device names you specified in the attach-volume command: you will find both. In the example above, the device names /dev/sdf, /dev/xvdz, and /dev/xvdxy appear as-is, alongside the /dev/nvme* nodes. You can use the device names you specified during attach-volume for things like mkfs, and I strongly recommend that you then use the UUID=... format to specify the volumes in your /etc/fstab, and never try mounting by /dev/ node name.
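For example (the device name and mount point below are placeholders), a typical sequence looks like this:

    sudo mkfs -t ext4 /dev/sdf    # the name you gave to attach-volume works here
    sudo blkid /dev/sdf           # prints the filesystem UUID
    sudo lsblk -o NAME,SERIAL     # on Nitro, the serial column shows the backing EBS volume ID

    # /etc/fstab entry: mount by UUID, never by /dev node name
    UUID=0a1b2c3d-1111-2222-3333-444455556666  /data  ext4  defaults,nofail  0  2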