
Degrading Performance of AWS EFS

We have hosted our WordPress site on AWS EC2 with Auto Scaling and EFS. All of a sudden, PermittedThroughput dropped to near zero bytes and BurstCreditBalance has been shrinking day by day (from 2 TB down to a few MB!), even though the EFS file system is only around 2 GB in size. This is the second time we have faced this issue. Has anyone had a similar experience, or any suggestions for this situation? We are planning to move from EFS to NFS or GlusterFS in the coming days.

[CloudWatch graph]

asked Jan 16 '17 by jobycxa


People also ask

Why is EFS slow?

If the file size for each file is small, the throughput to send that file is small. You might also notice latency when sending files. The distributed nature of EFS means that it must replicate to all mount points, so there is overhead per file operation. Therefore, latency in sending files is expected behavior.

What are some limitations of EFS?

Amazon EFS file system policies have a 20,000 character limit. In General Purpose mode, there is a limit of 35,000 file operations per second. Operations that read data or metadata consume one file operation, while operations that write data or update metadata consume five file operations.

Does EFS scale down?

It scales automatically, even to meet the most abrupt workload spikes. After the period of high-volume storage demand has passed, EFS will automatically scale back down. EFS can be mounted to different AWS services and accessed from all your virtual machines. Use it for running shared volumes, or for big data analysis.


1 Answer

Throughput on Amazon EFS scales as a file system grows.

...

The bursting capability (both in terms of length of time and burst rate) of a file system is directly related to its size. Larger file systems can burst at larger rates for longer periods of time. Therefore, if your application needs to burst more (that is, if you find that your file system is running out of burst credits), you should increase the size of your file system.

Note

There’s no provisioning with Amazon EFS, so to make your file system larger you need to add more data to it.

http://docs.aws.amazon.com/efs/latest/ug/performance.html

You mentioned that your filesystem is only storing 2 GiB of data. That's the problem: it's counterintuitive at first glance, but EFS actually gets faster as it gets larger... and the opposite is also true. Small filesystems accumulate burst credits only at the rate of 50 KiB per second (0.05 MiB/s) per GiB of data stored.

So, for a 2 GiB filesystem, the amount of data you can transfer each day without depleting your credits is very small:

60 sec/minute ×
60 min/hour ×
24 hr/day ×
0.05 MiB/s per GiB stored ×
2 GiB stored = 8,640 MiB/day

So about 8.6 GiB per day is all the data transfer this filesystem can sustain.
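As a quick sanity check, here is a minimal sketch of that arithmetic in Python (the 0.05 MiB/s-per-GiB baseline rate is taken from the figures above):

```python
# Minimal sketch: sustainable daily transfer for an EFS file system in
# bursting mode, assuming the baseline of 50 KiB/s (0.05 MiB/s) per GiB stored.
BASELINE_MIB_PER_SEC_PER_GIB = 0.05

def sustainable_mib_per_day(stored_gib: float) -> float:
    """Data you can move per day without (net) draining burst credits."""
    baseline_mib_per_sec = BASELINE_MIB_PER_SEC_PER_GIB * stored_gib
    return baseline_mib_per_sec * 60 * 60 * 24

print(sustainable_mib_per_day(2))    # 8640.0 MiB/day for a 2 GiB file system
print(sustainable_mib_per_day(100))  # 432000.0 MiB/day (~422 GiB) for 100 GiB
```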

This seems odd until you remember that you're only paying $0.60 per month.

You can boost the performance linearly by simply storing more data. The filesystem size that is used for the calculation is updated once per hour, so if you go this route, within a couple of hours you should see an uptick.
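One common workaround, sketched below, is simply to write a large padding file onto the file system so the metered size (and with it the baseline rate) goes up. This is not an official AWS feature, just a consequence of the sizing rule above; the mount point and file size are placeholders you would adjust:

```python
# Minimal sketch: grow the metered size of an EFS file system by writing a
# padding file. The path and size below are hypothetical; point them at your
# own mount and the baseline throughput you need.
import os

PADDING_PATH = "/mnt/efs/.padding/ballast.bin"  # hypothetical EFS mount point
PADDING_GIB = 20                                # extra GiB of stored data
CHUNK = b"\0" * (1024 * 1024)                   # write 1 MiB per iteration

os.makedirs(os.path.dirname(PADDING_PATH), exist_ok=True)
with open(PADDING_PATH, "wb") as f:
    for _ in range(PADDING_GIB * 1024):
        f.write(CHUNK)
```

Keep in mind that writing the padding file itself consumes throughput (and burst credits), and that you pay for the extra storage.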

The reason it's worked well until now is that each new filesystem comes with an initial credit balance equivalent to 2.1 TiB. This is primarily intended to allow the filesystem to be fast as you're initially loading data onto it, but in a low total storage environment such as the one you describe, it will last for days or weeks and then suddenly (apparently) you finally see the system settle down to its correct baseline behavior.
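If you want to catch this before throughput collapses, you can watch the same BurstCreditBalance metric your CloudWatch graph shows. A minimal sketch using boto3 (the region and file system ID are placeholders):

```python
# Minimal sketch: pull recent BurstCreditBalance datapoints for an EFS file
# system from CloudWatch. fs-12345678 and the region are placeholders.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-12345678"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,             # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], "bytes of burst credit")
```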

Essentially, you are paying for the settings of two interconnected parameters -- total storage capacity and baseline throughput -- neither of which is something you configure. If you want more storage, just store more files... and if you want more throughput, just... store more files.

answered Sep 29 '22 by Michael - sqlbot