 

Ridiculously slow writes to Amazon DynamoDB (PHP API)

This question has already been posted on the AWS forums but remains unanswered: https://forums.aws.amazon.com/thread.jspa?threadID=94589

I'm trying to perform an initial upload of a long list of short items (about 120 million of them), to retrieve them later by unique key, and it seems like a perfect case for DynamoDB.

However, my current write speed is very slow (roughly 8-9 seconds per 100 writes), which makes the initial upload almost impossible (it would take about three months at the current pace).

I have read the AWS forums looking for an answer and have already tried the following things:

  1. I switched from single "put_item" calls to batch writes of 25 items (the recommended maximum batch size), and each of my items is smaller than 1 KB (also as recommended). Even 25 of my items together are typically under 1 KB, though that is not guaranteed (and as I understand it, that shouldn't matter anyway, since only the individual item size is relevant for DynamoDB). A sketch of this setup appears after this list.

  2. I use the recently introduced EU region (I'm in the UK), specifying its endpoint directly by calling set_region('dynamodb.eu-west-1.amazonaws.com'), as there is apparently no other way to do that in the PHP API. The AWS console shows the table in the proper region, so that part works.

  3. I have disabled SSL by calling disable_ssl() (gaining about 1 second per 100 records).
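
For reference, here is roughly what points 1-3 look like together. This is a minimal sketch assuming the AWS SDK for PHP 1.x that the method names above come from; the table name 'items', the attribute names, and the sample data are illustrative assumptions, not my real code:

    <?php
    // Minimal sketch of the batch upload in points 1-3, assuming the
    // AWS SDK for PHP 1.x. The table name 'items', the attribute names
    // and the sample data are illustrative assumptions.
    require_once 'sdk.class.php';

    $dynamodb = new AmazonDynamoDB();
    $dynamodb->set_region('dynamodb.eu-west-1.amazonaws.com'); // point 2
    $dynamodb->disable_ssl();                                  // point 3

    // One chunk of up to 25 short items (point 1); the real loader
    // would read these from the 120-million-item source.
    $chunk = array('key-1' => 'value-1', 'key-2' => 'value-2');

    $put_requests = array();
    foreach ($chunk as $key => $value) {
        $put_requests[] = array(
            'PutRequest' => array(
                'Item' => array(
                    'id'    => array('S' => (string) $key),
                    'value' => array('S' => (string) $value),
                ),
            ),
        );
    }

    $response = $dynamodb->batch_write_item(array(
        'RequestItems' => array('items' => $put_requests),
    ));

    // Anything DynamoDB could not process comes back in
    // UnprocessedItems and must be retried; a production loader
    // has to handle that.
    if (!$response->isOK()) {
        error_log('Batch write failed');
    }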

Still, a test set of 100 items (4 batch write calls of 25 items each) never takes less than 8 seconds to index. Every batch write request takes about 2 seconds, so it's not as if the first one is instant and subsequent requests are then slow.

My table's provisioned throughput is 100 write and 100 read units, which should be enough so far (I tried higher limits as well, just in case, with no effect).

I also know that there is some overhead in request serialisation, so I could probably use a queue to "accumulate" my requests, but does that really matter that much for batch writes? I don't think that is the problem anyway, because even a single request takes too long.

I found that some people modify the cURL headers ("Expect:" in particular) in the API to speed requests up, but I don't think that is a proper approach, and the API has also been updated since that advice was posted.

The server my application runs on is fine as well: I've read that sometimes the CPU load goes through the roof, but in my case everything is fine; it's just the network request that takes too long.

I'm stuck now - is there anything else I can try? Please feel free to ask for more information if I haven't provided enough.

There are other recent threads, apparently on the same problem, here (no answer so far though).

This service is supposed to be ultra-fast, so I'm really puzzled by that problem in the very beginning.

asked May 21 '12 by Yuriy




2 Answers

If you're uploading from your local machine, the speed will be impacted by all sorts of traffic, firewalls, etc. between you and the servers. When I call DynamoDB, each request takes about 0.3 seconds simply because of the time to travel to/from Australia.
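
You can measure that round-trip cost for yourself. A minimal timing sketch, assuming the same SDK 1.x client as in the question; the table name and key are placeholders:

    <?php
    // Rough round-trip timing of a single DynamoDB call, assuming the
    // SDK 1.x client from the question; table name and key are
    // placeholders.
    require_once 'sdk.class.php';

    $dynamodb = new AmazonDynamoDB();
    $dynamodb->set_region('dynamodb.eu-west-1.amazonaws.com');

    $start = microtime(true);
    $dynamodb->get_item(array(
        'TableName' => 'items',
        'Key'       => array('HashKeyElement' => array('S' => 'key-1')),
    ));
    printf("Round trip: %.3f s\n", microtime(true) - $start);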

My suggestion would be to create an EC2 instance (server) with PHP, upload the script and all files to the EC2 server as a block, and then do the dump from there. The EC2 server should have blistering speed to the DynamoDB server.

If you're not confident about setting up EC2 with LAMP yourself, there is a new service, "Elastic Beanstalk", that can do it all for you. When you've completed the upload, simply burn the server, and hopefully you can do all of that within their "free tier" pricing structure :)

It doesn't solve the long-term connectivity issue, but it will cut down the three-month upload!

answered Sep 25 '22 by Robbie


I would try a multithreaded upload to increase throughput. Maybe add threads one at a time and see whether throughput increases linearly. As a test, you can simply run two of your current loaders at the same time and see if they both go at the speed you are observing now. A sketch of one way to parallelise the loader follows.
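
One way to run several loaders from a single script is to fork worker processes. A minimal sketch assuming the PHP CLI with the pcntl extension; upload_slice() is a hypothetical stand-in for the existing batch-write loop:

    <?php
    // Fork N workers, each uploading its own slice of the input, so the
    // ~2-second request latency overlaps instead of adding up serially.
    // Assumes the PHP CLI with the pcntl extension; upload_slice() is a
    // hypothetical stand-in for the existing batch-write loop.
    $workers = 4; // add workers one at a time and watch the throughput

    for ($i = 0; $i < $workers; $i++) {
        $pid = pcntl_fork();
        if ($pid === 0) {
            // Child: upload every $workers-th batch, starting at
            // offset $i, so the slices never overlap.
            upload_slice($i, $workers);
            exit(0);
        }
    }

    // Parent: wait for all workers to finish.
    while (pcntl_wait($status) > 0) {
        // reap finished workers
    }

Each worker keeps its own connection, so if the bottleneck really is per-request latency rather than provisioned throughput, overall throughput should scale roughly linearly until one of the two limits is hit.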

answered Sep 21 '22 by Chris Seline