I need a Lambda function with the following flow:
S3 put-file event -> Lambda function -> insert a row into DynamoDB
When I create a test event from the AWS Lambda console, I get this example, with only one record in the Records list:
{
  "Records": [ // <<<---------------- only 1 element
    {
      "eventVersion": "2.0",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "s3": {
        "configurationId": "testConfigRule",
        "object": {
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901",
          "key": "HappyFace.jpg",
          "size": 1024
        },
        "bucket": {
          "arn": "arn:aws:s3:::mybucket",
          "name": "roeyg-oregon-s3-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          }
        },
        "s3SchemaVersion": "1.0"
      },
      "responseElements": {
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
        "x-amz-request-id": "EXAMPLE123456789"
      },
      "awsRegion": "us-east-1",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "eventSource": "aws:s3"
    }
  ]
}
I tried several ways to see whether it is possible to get more than one element in this list: using the CLI, uploading several files together, even uploading a whole folder. In all of these scenarios I got one record per event.
My question: is there any scenario in which I can get more than one file in a single event?
If so, I would change my code to loop over the list instead of referencing only the first element, as the AWS example suggests.
You can also increase concurrency by processing multiple batches from each shard in parallel. Lambda can process up to 10 batches in each shard simultaneously. If you increase the number of concurrent batches per shard, Lambda still ensures in-order processing at the shard level. (Note that this quote applies to stream-based event sources such as Kinesis, not to S3.)
Currently, however, an S3 event notification does not deliver multiple objects in a single Lambda trigger.
At the time that you create a Lambda function, you can specify only one trigger.
Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke the function in the function's resource-based permissions policy.
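For reference, those notification settings are a JSON document attached to the bucket. A minimal sketch (the configuration Id, account number, function name, and prefix below are placeholders, not values from the question) might look like:

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "insert-row-on-put",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
      "Events": ["s3:ObjectCreated:Put"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "uploads/" }
          ]
        }
      }
    }
  ]
}
```

You could apply a document like this with `aws s3api put-bucket-notification-configuration --bucket my-bucket --notification-configuration file://notification.json`, after granting S3 invoke permission with `aws lambda add-permission`.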
Given your current configuration, you will only get one record per invocation. This is because each distinct S3 Put event triggers a distinct S3 event notification, with your Lambda function as the recipient. Lambda then scales out, running many invocations concurrently as events arrive, subject to your account's concurrency limit (which can be raised).
AWS Lambda receives Records as a collection because S3 Event Notifications deliver the event to Lambda using the S3 notification event message structure, and that format is a collection in anticipation of event types that may return more than one record. You can see a full list of S3 event notification types here.
This doesn't impact S3 notifications from events like s3:ObjectCreated:Put, so you're fine to leave the function as-is, because that is how Put notifications are intended to work: one event -> one notification -> one invocation.
That said, if you still want your code to be able to handle multiple Records per invocation, there is no harm in writing it to loop over the whole Records list rather than referencing only the first element.
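If you do decide to loop, a minimal handler sketch could look like the following. The DynamoDB table name `my-table` and the item attribute names are assumptions for illustration, not values from the question:

```python
import urllib.parse


def extract_s3_items(event):
    """Turn every record in an S3 event into a DynamoDB-ready item.

    Looping over the whole Records list (instead of hard-coding
    event['Records'][0]) keeps the handler correct even if an event
    ever delivers more than one record.
    """
    items = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        items.append({
            # S3 object keys arrive URL-encoded (e.g. spaces as '+')
            "objectKey": urllib.parse.unquote_plus(s3["object"]["key"]),
            "bucket": s3["bucket"]["name"],
            "size": s3["object"]["size"],
            "eventTime": record["eventTime"],
        })
    return items


def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    table = boto3.resource("dynamodb").Table("my-table")  # placeholder name
    items = extract_s3_items(event)
    for item in items:
        table.put_item(Item=item)
    return {"inserted": len(items)}
```

Separating the parsing from the DynamoDB write also makes the record-handling logic easy to unit-test without AWS credentials.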