The question is: how much work can a Lambda function do in the background after the response has already been sent? For instance: receive some data, tell the client 'ok', and then proceed with database operations that may take some time.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
	Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	go RecordQuery(name)
	return fmt.Sprintf("Hello %s!", name.Name), nil
}

func RecordQuery(name MyEvent) {
	// insert stuff in the database, mark user active,
	// log activity, etc.
}

func main() {
	lambda.Start(HandleRequest)
}
Can we count on the goroutine to be able to do its work?
When this happens, the container usually freezes: it simply stops at whatever it was doing, and the invocation ends with a timeout. An out-of-memory error may simply mean the function needs more memory, in which case increasing the memory size will fix it.
A low Lambda function timeout can cause healthy connections to be dropped early. If that's happening in your use case, increase the function timeout setting to allow enough time for your API call to get a response.
For queue-based event sources, AWS Lambda retries the invocation until the queue's conditions for message deletion are met. Whatever the invocation source, repeated throttling by AWS Lambda can result in your requests never actually being run, which shows up in production as a bug.
Timeout: a Lambda running longer than the configured timeout duration is forcibly terminated with a 'Task timed out after … seconds' message. The default value is 6 seconds, and the maximal value is 5 minutes.
It turns out we can't assume the code will run.
Example implementation (replace RecordQuery above and add "log" and "time" to the imports):

var alreadyLogging bool

func RecordQuery(name MyEvent) {
	// Start at most one logging loop per container. A container
	// handles one invocation at a time, so a plain bool is enough.
	if alreadyLogging {
		return
	}
	alreadyLogging = true
	for i := 0; ; i++ {
		time.Sleep(time.Second)
		log.Print("Still here ", i)
	}
}
Behaviour: as long as the container where the Lambda is running keeps receiving requests, the goroutine keeps executing. But all code stops when the container is no longer receiving requests; note the gap in the timestamps below (08:50:49 to 08:51:36), where the frozen container was thawed by the next invocation.
Possible output (in cloudwatch):
2018/05/16 08:50:46 Still here 70
2018/05/16 08:50:47 Still here 71
2018/05/16 08:50:48 Still here 72
2018/05/16 08:50:49 Still here 73
2018/05/16 08:51:36 Still here 74
2018/05/16 08:51:37 Still here 75
2018/05/16 08:51:38 Still here 76
Note that in the Node.js programming model, you can request AWS Lambda to freeze the process soon after the callback is called, even if there are events in the event loop, by setting the context object property callbackWaitsForEmptyEventLoop to false (see "The Context Object Properties").
It would be interesting to see some use cases for this API.
Update: in Node.js, as soon as you connect to a database you will have a non-empty event loop. That's why this setting is available.