Amazon S3: maximum PUT requests per second

I'm putting a large number of small items to S3 using the REST API. The average payload is ~10 bytes.

The items all go into one bucket and have randomized names (i.e., there is no lexicographic ordering to the keys).

From EC2, I've managed a rate of 400-500 PUTs per second, using a thread pool of 96 threads with 64 TCP connections.

I occasionally get an HTTP 500, but have not yet received a 503 ("Slow Down"), which indicates that the client should reduce its request rate.

Does anyone know what I can realistically attain? I know the pipe between EC2 and S3 can manage a throughput of 20 MB/s, so I'm hoping to do a bit better.
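
For concreteness, here is a minimal sketch of the kind of concurrent PUT loop described above, written with boto3; the bucket name, payload, key scheme, and request count are illustrative assumptions, not the poster's actual code:

    # Minimal sketch of a concurrent small-object PUT loop (assumptions noted below).
    import uuid
    from concurrent.futures import ThreadPoolExecutor

    import boto3
    from botocore.config import Config

    # One shared client; boto3 clients are generally thread-safe.
    # max_pool_connections mirrors the 64 TCP connections described above.
    s3 = boto3.client("s3", config=Config(max_pool_connections=64))
    BUCKET = "example-bucket"  # hypothetical bucket name
    PAYLOAD = b"0123456789"    # ~10-byte payload, as in the question

    def put_one(_):
        # Randomized key, so keys have no lexicographic order (as in the question).
        s3.put_object(Bucket=BUCKET, Key=uuid.uuid4().hex, Body=PAYLOAD)

    # 96 worker threads, mirroring the thread pool described above.
    with ThreadPoolExecutor(max_workers=96) as pool:
        list(pool.map(put_one, range(10_000)))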

asked Feb 24 '12 by user756079

People also ask

What is the maximum S3 puts per second rate supported in AWS?

Amazon S3 supports a request rate of 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. Since the limit applies per prefix, keys can be sharded across multiple prefixes to scale further (see the sketch after this section).

How many requests per second does Amazon get?

Amazon ships approximately 1.6 million packages a day. That works out to more than 66,000 orders per hour, or 18.5 orders per second.

How many GET and PUT requests are freely available in S3?

The AWS Free Tier, offered to new AWS customers, gives you 5 GB of storage in the AWS S3 Standard storage tier. This includes up to 2,000 PUT, POST, COPY, or LIST requests; 20,000 GET requests; and 15 GB of outgoing data transfer per month for a year.

Is there any limit for S3 bucket?

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB.
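
Since the 3,500 PUT/s limit quoted above applies per key prefix, a common way to scale beyond it is to spread keys across several prefixes. A hedged sketch follows; the shard count and key scheme are illustrative assumptions:

    # Key-prefix sharding: spreading keys across N prefixes raises the
    # aggregate request-rate ceiling, since S3's limit applies per prefix.
    import hashlib

    NUM_SHARDS = 16  # hypothetical; more shards spread load across more prefixes

    def sharded_key(name: str) -> str:
        # Hash the logical name to a stable shard prefix, e.g. "07/myobject".
        shard = int(hashlib.md5(name.encode()).hexdigest(), 16) % NUM_SHARDS
        return f"{shard:02d}/{name}"

    print(sharded_key("myobject"))  # e.g. "11/myobject"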


1 Answer

It should be no surprise that you are seeing poor performance using REST to transfer such tiny payloads.

The way to do better is to restructure your protocol or storage so that the per-transaction overhead isn't the dominant factor.
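
One restructuring along those lines is to pack many small records into a single object and PUT the batch once. A hedged sketch follows; the newline framing, bucket name, and batch size are illustrative assumptions, not a prescribed format:

    # Batching many ~10-byte records into one object so a single PUT
    # amortizes the per-request overhead.
    import uuid

    import boto3

    s3 = boto3.client("s3")

    def put_batch(records, bucket="example-bucket"):  # hypothetical bucket
        body = b"\n".join(records)  # one request now carries thousands of records
        s3.put_object(Bucket=bucket, Key=f"batches/{uuid.uuid4().hex}", Body=body)

    # 10,000 records in one request instead of 10,000 requests.
    put_batch([b"0123456789"] * 10_000)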

Indeed, the size of the pipe is largely immaterial here, because you're filling it almost entirely with HTTP overhead; if you could double the throughput of the connection, you'd simply send twice as much useless overhead with effectively no change in usable data.
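
To put rough numbers on that (the header size here is an illustrative assumption, not a measurement): with ~800 bytes of HTTP request and response headers per PUT against a 10-byte payload, usable data is under 2% of the bytes on the wire, so 500 requests per second moves only about 5 KB/s of payload inside roughly 400 KB/s of traffic.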

answered Oct 03 '22 by msw