 

Limit concurrent invocations of an AWS Lambda triggered from AWS SQS (Reserved concurrency ignored)?

To me this seemed like a simple use case when I started, but it turned out to be a lot harder than I had anticipated.

Problem

I have an AWS SQS queue acting as a job queue that triggers a worker AWS Lambda. However, since the worker Lambdas share non-scalable resources, it is important to limit the number of concurrently running Lambdas to (for the sake of example) no more than 5 running simultaneously.
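
For context, the queue is wired up as a Lambda event source roughly like this (a minimal boto3 sketch; the queue ARN and function name are placeholders, not my real resources):

```python
import boto3

lambda_client = boto3.client("lambda")

# Subscribe the worker Lambda to the job queue.
# The ARN and function name below are placeholders.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:eu-west-1:123456789012:jobs",
    FunctionName="worker-fn",
    BatchSize=10,  # up to 10 messages per invocation
)
```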

Simple enough, according to Managing Concurrency for a Lambda Function

Reserved concurrency also limits the maximum concurrency for the function, and applies to the function as a whole

However, setting the Reserved concurrency property to 5 seems to be completely ignored by SQS: the queue's Messages in Flight metric in my case shows closer to 20-30 concurrent executions, depending on the number of messages put into the queue.
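
For reference, this is how the limit itself was applied (a minimal boto3 sketch; the function name is a placeholder):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve (and thereby cap) capacity for the worker:
# at most 5 concurrent executions.
lambda_client.put_function_concurrency(
    FunctionName="worker-fn",  # placeholder name
    ReservedConcurrentExecutions=5,
)
```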

Question

The closest I have come to a solution is to use an SQS FIFO queue and set the MessageGroupId to a value that is either randomly selected from, or alternated between, 1-5. However, due to uneven workload this is not optimal, as it would be better to have the concurrency distributed by actual workload rather than by chance.
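
A minimal sketch of that workaround (boto3; the queue URL and message body are placeholders):

```python
import random
import uuid

import boto3

sqs = boto3.client("sqs")

# Spread jobs across 5 message groups. A FIFO queue processes each group
# sequentially, so this caps effective concurrency at 5 -- but the split
# is by chance rather than by actual workload.
sqs.send_message(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/jobs.fifo",
    MessageBody='{"job_id": 42}',  # placeholder payload
    MessageGroupId=str(random.randint(1, 5)),
    MessageDeduplicationId=str(uuid.uuid4()),
)
```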

I have also tried using AWS Step Functions, as the Map state has a MaxConcurrency parameter. This seemed to work well on small job queues, but because each state has an input/output limit of 32 KB, it was not feasible in my use case.
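
For illustration, the Map-state approach looks roughly like this (a sketch of the state machine definition, built as a Python dict; the Lambda ARN is a placeholder):

```python
import json

# Sketch of an Amazon States Language definition: a Map state fans out
# over the items in $.jobs while MaxConcurrency limits parallelism to 5.
definition = {
    "StartAt": "ProcessJobs",
    "States": {
        "ProcessJobs": {
            "Type": "Map",
            "ItemsPath": "$.jobs",
            "MaxConcurrency": 5,  # at most 5 iterations run in parallel
            "Iterator": {
                "StartAt": "Worker",
                "States": {
                    "Worker": {
                        "Type": "Task",
                        # Placeholder Lambda ARN
                        "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:worker-fn",
                        "End": True,
                    }
                },
            },
            "End": True,
        }
    },
}

print(json.dumps(definition, indent=2))  # pass this to states:CreateStateMachine
```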

Has anyone found a better or alternative solution? Are there any other ways Reserved concurrency is supposed to be used?

Similar

Here are some similar questions I have found, but I think my question is different because I am not interested in limiting the total number of invocations, and (although I have not tried it myself) I cannot see why triggers from S3 or Kinesis Streams would behave differently from SQS.

asked Nov 16 '22 by Adelost


1 Answer

According to the AWS docs, SQS does not take reserved concurrency into account. If the number of batches to be processed is greater than the reserved concurrency, your messages might end up in the dead-letter queue:

If your function returns an error, or can't be invoked because it's at maximum concurrency, processing might succeed with additional attempts. To give messages a better chance to be processed before sending them to the dead-letter queue, set the maxReceiveCount on the source queue's redrive policy to at least 5.

Source: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
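
A minimal sketch of setting that redrive policy with boto3 (the queue URL and DLQ ARN are placeholders):

```python
import json

import boto3

sqs = boto3.client("sqs")

# Point the source queue at a dead-letter queue and allow each message
# at least 5 receive attempts before it is moved there.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:eu-west-1:123456789012:jobs-dlq",
    "maxReceiveCount": 5,
}
sqs.set_queue_attributes(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/jobs",
    Attributes={"RedrivePolicy": json.dumps(redrive_policy)},
)
```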

You can check this article for details: https://zaccharles.medium.com/lambda-concurrency-limits-and-sqs-triggers-dont-mix-well-sometimes-eb23d90122e0

answered Dec 29 '22 by Moose on the Loose