I am using DynamoDB in one of my applications and have enabled auto scaling on the table because my request patterns are sporadic. But there is one issue I keep facing: traffic increases much faster than auto scaling can react. Look at the image below.
The bursts are generally missed, which causes throttling and, in some cases, loss of data. Has anyone here faced this before? Any known fixes?
For small bursts like that, it's unlikely you were actually throttled. DynamoDB gives you a bit of extra burst capacity if you were below your provisioned threshold for a while. From the DynamoDB Best Practices guide:
DynamoDB provides some flexibility in the per-partition throughput provisioning. When you are not fully utilizing a partition's throughput, DynamoDB retains a portion of your unused capacity for later bursts of throughput usage. DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity.
It looks like auto scaling kicked in after about 10 minutes. That's reasonable according to the documentation FAQ (emphasis added):
Q: How long does it take to change the provisioned throughput level of a table?
In general, decreases in throughput will take anywhere from a few seconds to a few minutes, while increases in throughput will typically take anywhere from a few minutes to a few hours.
We strongly recommend that you do not try and schedule increases in throughput to occur at almost the same time when that extra throughput is needed. We recommend provisioning throughput capacity sufficiently far in advance to ensure that it is there when you need it.
You mention these spikes are causing data loss. What kind of retry policy are you using? Have you tried configuring retries beyond the default?
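With boto3, for instance, you can raise the retry count via `botocore.config.Config(retries={"max_attempts": ...})`; the underlying idea is exponential backoff with jitter, so throttled writes are retried instead of dropped. Here is a minimal pure-Python sketch of that pattern (the `ThrottlingError` exception and the delay values are illustrative assumptions, not part of any SDK):

```python
import random
import time

class ThrottlingError(Exception):
    """Stand-in for a ProvisionedThroughputExceededException."""

def with_retries(fn, max_attempts=5, base_delay=0.05, max_delay=2.0):
    """Call fn(), retrying throttled requests with exponential backoff + jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ThrottlingError:
            if attempt == max_attempts:
                raise  # out of retries: surface the error rather than losing the write
            # full jitter: sleep a random amount up to the capped exponential delay
            time.sleep(random.uniform(0, min(max_delay, base_delay * 2 ** attempt)))

# Example: a write that is throttled twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_put():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottlingError()
    return "ok"

print(with_retries(flaky_put))  # "ok"
```

With enough retry headroom, a short burst that outpaces auto scaling degrades into extra latency rather than lost items.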
The best alternative is to do your own scaling on a given schedule, and be sure to plan for enough capacity.
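A scheduled-scaling job (a cron task or Lambda, say) can look up the capacity planned for the current hour and call `UpdateTable` ahead of the expected burst. A sketch under assumed numbers; the schedule values and table name are hypothetical, and the actual boto3 call is shown commented out so the snippet stays self-contained:

```python
from datetime import datetime, timezone

# Hypothetical schedule: (start_hour_utc, read_capacity, write_capacity).
# Provision *before* the burst, since throughput increases can take minutes.
SCHEDULE = [
    (0, 100, 50),     # overnight baseline
    (8, 1000, 500),   # raised at 8:00 ahead of 9:00 business-hours traffic
    (20, 200, 100),   # evening taper
]

def capacity_for(hour):
    """Return (rcu, wcu) for the given UTC hour from the schedule."""
    rcu, wcu = SCHEDULE[-1][1], SCHEDULE[-1][2]  # last entry wraps past midnight
    for start, r, w in SCHEDULE:
        if hour >= start:
            rcu, wcu = r, w
    return rcu, wcu

def scale_table(table_name, hour=None):
    hour = datetime.now(timezone.utc).hour if hour is None else hour
    rcu, wcu = capacity_for(hour)
    # With boto3 this would be roughly:
    # boto3.client("dynamodb").update_table(
    #     TableName=table_name,
    #     ProvisionedThroughput={"ReadCapacityUnits": rcu,
    #                            "WriteCapacityUnits": wcu})
    return rcu, wcu

print(scale_table("orders", hour=9))  # (1000, 500)
```

The point of the schedule is the lead time: you pay for some idle capacity, but the throughput is already in place when the burst arrives instead of arriving minutes late.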
This article explains both how auto scaling works with CloudWatch metrics and why it is not more aggressive. I think it is what you are looking for, as you mentioned in your comments:
https://hackernoon.com/the-problems-with-dynamodb-auto-scaling-and-how-it-might-be-improved-a92029c8c10b
AWS experts told me this: DynamoDB is organized into partitions, and scaling up can require that the partitions be reorganized by adding additional partitions, which takes time. One way to mitigate this is to create the table with provisioned capacity equal to the maximum you expect, and once the table is created, reduce the capacity to the actual values. This puts a partition scheme in place that can support the higher capacity levels, so scaling up can happen more quickly without a reorganization.
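The effect can be seen with the approximate partition-count heuristic that used to appear in the DynamoDB documentation: partitions ≈ ceil(max(RCU/3000 + WCU/1000, table_size_GB/10)). Creating the table at peak capacity forces the higher partition count up front. A sketch; the formula is an approximation from older documentation and the capacity numbers are illustrative:

```python
import math

def estimated_partitions(rcu, wcu, size_gb=0):
    """Approximate DynamoDB partition count (older published heuristic):
    partitions = ceil(max(RCU/3000 + WCU/1000, size_GB/10))."""
    by_throughput = rcu / 3000 + wcu / 1000
    by_size = size_gb / 10
    return max(1, math.ceil(max(by_throughput, by_size)))

# Creating at the expected peak (then dialing capacity down) keeps the
# partition layout sized for the peak, so later scale-ups need no reorg.
print(estimated_partitions(1000, 500))    # modest initial capacity -> 1 partition
print(estimated_partitions(12000, 6000))  # peak capacity -> 10 partitions
```

Note that partitions are never merged when you lower capacity, which is exactly why the create-high-then-reduce trick works.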