I'm using AWS to run some data processing. I have 400 spot instances in EC2 with 4 processes each, all of them writing to a single bucket in S3. I've started to get an (apparently uncommon) error saying:
503: Slow Down
Does anyone know what the actual request limit is for an S3 bucket? I cannot find any AWS documentation on it.
Thank you!
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data per prefix in a bucket, which can save significant processing time for no additional charge.
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB.
When you first start using Amazon S3 as a new customer, you can take advantage of a free usage tier. This gives you 5 GB of S3 storage in the Standard Storage class, 2,000 PUT requests, 20,000 GET requests, and 15 GB of data transfer out of your storage “bucket” each month free for one year.
Bucket policies are limited to 20 KB in size. You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. You can then use the generated document to set your bucket policy by using the Amazon S3 console, through several third-party tools, or via your application.
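If you go the "via your application" route, a minimal sketch with boto3 might look like the following. The bucket name and policy statement here are placeholders, not something specific to this question:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Placeholder policy document; in practice, paste the output of the AWS Policy Generator.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowPublicRead",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
            }
        ],
    }

    # put_bucket_policy expects the policy as a JSON string (must stay under the 20 KB limit).
    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))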
AWS documents 503 as the result of a temporary condition; it does not reflect a specific limit.
According to the "Best Practices for Using Amazon S3" article's section on handling errors (http://aws.amazon.com/articles/1904/):
500-series errors indicate that a request didn't succeed, but may be retried. Though infrequent, these errors are to be expected as part of normal interaction with the service and should be explicitly handled with an exponential backoff algorithm (ideally one that utilizes jitter). One such algorithm can be found at http://en.wikipedia.org/wiki/Truncated_binary_exponential_backoff.
Particularly if you suddenly begin executing hundreds of PUTs per second into a single bucket, you may find that some requests return a 503 "Slow Down" error while the service works to repartition the load. As with all 500 series errors, these should be handled with exponential backoff.
While less detailed, the S3 Error responses documentation does include 503 Slow Down (http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html).
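To make the retry advice concrete, here is a minimal sketch of a PUT wrapped in exponential backoff with full jitter using boto3. This is only an illustration; the bucket name, key, and retry cap are placeholders, not part of the original answer:

    import random
    import time

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def put_with_backoff(bucket, key, body, max_retries=8):
        """PUT an object, retrying 503 Slow Down and other 5xx errors with full jitter."""
        for attempt in range(max_retries):
            try:
                return s3.put_object(Bucket=bucket, Key=key, Body=body)
            except ClientError as err:
                code = err.response.get("Error", {}).get("Code", "")
                status = err.response.get("ResponseMetadata", {}).get("HTTPStatusCode", 0)
                if code not in ("SlowDown", "ServiceUnavailable") and status < 500:
                    raise  # not a retryable 5xx error
                # Exponential backoff with full jitter, capped at roughly 20 seconds.
                time.sleep(random.uniform(0, min(20, 2 ** attempt)))
        raise RuntimeError("exhausted retries for s3://%s/%s" % (bucket, key))

    put_with_backoff("example-bucket", "results/part-0001.csv", b"some,data\n")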
From what I've read, Slow Down is a very infrequent error. However, after posting this question I received an email from AWS saying they had capped my LIST requests to 10 requests per second because I had too many going to a specific bucket.
I had been using a custom queuing script for the project I am working on, which relied on LIST requests to determine the next item to process. After running into this problem I switched to AWS SQS, which was a lot simpler to implement than I'd thought it would be. No more custom queue, no more massive amount of LIST requests.
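For anyone curious, the pattern is roughly: each work item becomes an SQS message, a worker receives a message, processes it, then deletes it so it isn't redelivered. A minimal boto3 sketch (the queue URL, message body, and process() stub are placeholders, not my actual setup):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

    def process(item):
        print("processing", item)  # stand-in for the real work

    # Producer: enqueue one S3 key to be processed instead of LISTing the bucket.
    sqs.send_message(QueueUrl=queue_url, MessageBody="s3://example-bucket/input/item-0001.json")

    # Worker: long-poll for a message, process it, then delete it.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])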
Thanks for the answers!
To add to what James said, some of S3's partitioning internals have been discussed publicly and can be used to mitigate this in the future, although exponential backoff is still required regardless.
See here: http://aws.typepad.com/aws/2012/03/amazon-s3-performance-tips-tricks-seattle-hiring-event.html
Briefly, don't store everything with the same prefix, or there is a higher likelihood you will hit these errors. Find some way to make the very first characters of the prefix as random as possible to avoid hotspots in S3's internal partitioning, as in the sketch below.
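As a rough illustration (not taken from the linked post), one common approach is to prepend a few characters of a hash of the key, so the leading characters vary even when the logical names are sequential:

    import hashlib

    def randomized_key(original_key):
        """Prefix the key with a few hex chars of its MD5 so keys spread across partitions."""
        prefix = hashlib.md5(original_key.encode("utf-8")).hexdigest()[:4]
        return "%s/%s" % (prefix, original_key)

    # e.g. "logs/2013-01-01/host-42.gz" -> "<4 hex chars>/logs/2013-01-01/host-42.gz"
    print(randomized_key("logs/2013-01-01/host-42.gz"))

The trade-off is that hashed prefixes make it harder to list related objects together, so they are best applied only where the write rate actually demands it.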