I have an AWS Lambda function (Python) that is triggered by SQS events. If the Lambda fails, SQS retries based on the retry settings. How can I change the retry settings to enable exponential backoff?
ExponentialBackoff is my utility class where the code that calculates and sets the visibility timeout lives. It also has some other utility functions that are not essential for this demonstration. There you have it: a bare-bones exponential backoff implementation for AWS Lambda.
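The class itself is not shown above, so here is a minimal sketch of what such a utility could look like, assuming the message's ApproximateReceiveCount drives the delay; the class name, method name, and base/cap values are illustrative assumptions, not the original code.

```python
# Hypothetical sketch of such a utility class; names and defaults are
# assumptions, not the author's original code.
import boto3

class ExponentialBackoff:
    def __init__(self, queue_url, base_seconds=30, max_seconds=43200):
        # 43200 seconds (12 hours) is the maximum visibility timeout SQS allows
        self.sqs = boto3.client("sqs")
        self.queue_url = queue_url
        self.base_seconds = base_seconds
        self.max_seconds = max_seconds

    def set_visibility_timeout(self, record):
        """Double the message's visibility timeout on every failed delivery."""
        # ApproximateReceiveCount grows by one each time SQS redelivers
        attempts = int(record["attributes"]["ApproximateReceiveCount"])
        timeout = min(self.base_seconds * 2 ** (attempts - 1), self.max_seconds)
        self.sqs.change_message_visibility(
            QueueUrl=self.queue_url,
            ReceiptHandle=record["receiptHandle"],
            VisibilityTimeout=timeout,
        )
```

Calling this from the handler's exception path and then re-raising lets the invocation fail (so SQS redelivers) while the message stays hidden for the newly computed interval.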
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries.
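For instance, with boto3 you can tune this built-in retry behavior through botocore's Config; the attempt count below is just an example value.

```python
import boto3
from botocore.config import Config

# "standard" retry mode uses exponential backoff with jitter and a capped
# maximum delay; max_attempts bounds the total number of attempts.
retry_config = Config(retries={"max_attempts": 5, "mode": "standard"})
sqs = boto3.client("sqs", config=retry_config)
```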
Dead-letter queues and message replay
Often these failed messages are caused by application errors. For example, a consumer application fails to parse a message correctly and throws an unhandled exception. The message is never processed successfully, and once it has been received more times than the redrive policy allows, SQS moves it to the DLQ.
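A DLQ is attached to a source queue through a redrive policy. A sketch, using placeholder queue URL/ARN values and an illustrative maxReceiveCount:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL and DLQ ARN; substitute your own resources.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq",
    "maxReceiveCount": "5",  # move a message to the DLQ after 5 failed receives
}
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes={"RedrivePolicy": json.dumps(redrive_policy)},
)
```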
I am not sure you can use exponential backoff with the SQS trigger because, behind the scenes, it is not really a trigger: Lambda itself keeps polling the SQS queue for messages.
SQS will make the message invisible for whatever period is defined in the Visibility Timeout attribute, meaning that every time a Lambda function picks up a message, this timeout must elapse before the message becomes visible to other consumers again.
This leaves you with two options:
1) Don't use the Lambda trigger and poll the queue yourself. Keep in mind that in this case you will also have to delete the messages manually (see the sketch after this list).
2) Increase the Visibility Timeout on your source SQS queue so that it gives potentially failing downstream systems enough time to recover.
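Here is a minimal sketch of option 1, assuming a hypothetical queue URL and illustrative backoff parameters: poll with long polling, delete on success, and push the visibility timeout out exponentially on failure.

```python
import boto3

# Hypothetical queue URL; the base/cap backoff values are illustrative.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
sqs = boto3.client("sqs")

def process(body):
    ...  # your business logic; raise on failure

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
        AttributeNames=["ApproximateReceiveCount"],
    )
    for msg in resp.get("Messages", []):
        try:
            process(msg["Body"])
            # On success the message must be deleted explicitly
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        except Exception:
            # On failure, hide the message exponentially longer each attempt
            attempts = int(msg["Attributes"]["ApproximateReceiveCount"])
            timeout = min(30 * 2 ** (attempts - 1), 43200)  # cap at 12 hours
            sqs.change_message_visibility(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=msg["ReceiptHandle"],
                VisibilityTimeout=timeout,
            )
```

For option 2, the queue's VisibilityTimeout attribute can be raised in the console or via set_queue_attributes, but note that it is a single fixed value, not a per-retry schedule.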
More information on how Lambda processes events from AWS services can be found in the docs.