I am uploading a file from an Android device to an S3 bucket with this code:
TransferUtility transferUtility = new TransferUtility(s3, context);
// bucketName, key and file stand in for the actual arguments
transferUtility.upload(bucketName, key, file);
After that I have a Lambda trigger attached to the S3:ObjectCreated event.
When the Lambda is executed, I try to get the file via the S3.getObject() function. Unfortunately, I sometimes receive a "NoSuchKey: The specified key does not exist." error. After that the Lambda retries a couple of times, successfully gets the file, and proceeds with its execution.
In my opinion the Lambda function is executed before the file in S3 is available. But that should not happen by design: the trigger should only fire after the file upload to S3 is complete.
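As an aside, one thing worth checking on the Lambda side: the object key inside the S3 event record arrives URL-encoded (spaces become `+`, special characters become `%XX`), so it must be decoded before being passed to `getObject`; using the raw key is another common way to get `NoSuchKey`. A minimal sketch (the bucket name and key below are made-up examples):

```javascript
// Extract bucket and key from an S3 event record.
// The key is URL-encoded in the event: spaces arrive as '+', other
// special characters as %XX escapes.
function parseS3Record(record) {
  const bucket = record.s3.bucket.name;
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
  return { bucket, key };
}

// Example record shaped like a typical s3:ObjectCreated event
// (placeholder bucket and key):
const sampleRecord = {
  s3: {
    bucket: { name: "my-bucket" },
    object: { key: "uploads/my+photo%281%29.jpg" } // "uploads/my photo(1).jpg"
  }
};

const { bucket, key } = parseS3Record(sampleRecord);
console.log(bucket, key); // my-bucket uploads/my photo(1).jpg
```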
According to the announcement of Aug 4, 2015:
Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.
Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3.
But prior to this:
All regions except US Standard (renamed to US East (N. Virginia)) supported read-after-write consistency for new objects uploaded to Amazon S3.
My bucket is in the US East (N. Virginia) region and was created before Aug 4, 2015. I don't know whether this could be the issue...
EDIT: 20.10.2016
According to the documentation, an EVENTUALLY CONSISTENT READ operation may return NO RESULT even if two or more WRITE operations completed before it:
In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2 (read 2). For a consistent read, R1 and R2 both return color = ruby. For an eventually consistent read, R1 and R2 might return color = red, color = ruby, or no results, depending on the amount of time that has elapsed.
In my case, NoSuchKey meant it could not find the topic. I should have created both the topic and the bucket notification using the same credentials. But when creating the topic, it let me create it without any credentials, so the topic ended up being created under an anonymous user. That is what caused it to throw the NoSuchKey error.
If the requested object has been available in the S3 bucket for some time and you still receive a 404 NoSuchKey error, then check the following: confirm that the request matches the object name exactly, including the capitalization of the object name. Requests for S3 objects are case sensitive.
Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke the function via the function's resource-based permissions policy.
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
To protect against this issue, the S3 SDK waiter can be used. Once the notification is received, we can make sure the object is actually there. For example, with the AWS JavaScript SDK you can use the following snippet:
s3.waitFor("objectExists", {
  Bucket: "<bucket-name>",
  Key: "<object-key>"
}, callback);
Please note that waitFor will increase the execution time of your Lambda, meaning you will need to extend its timeout. According to the documentation, the check is performed every 5 seconds, up to 20 times, i.e. up to 100 seconds in the worst case. So setting the timeout to around two minutes should help avoid execution timeout exceptions.
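If you would rather not block on the waiter's fixed 5-second polling, an alternative (my own sketch, not from the AWS docs) is to retry the read yourself with a short exponential backoff. The `withRetry` helper and the delay values below are assumptions; in a real handler the `operation` would be something like `() => s3.getObject(params).promise()`:

```javascript
// Retry an async operation with exponential backoff, retrying only
// on "NoSuchKey" (the object is simply not visible yet).
async function withRetry(operation, { attempts = 5, baseDelayMs = 200 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      // Rethrow real errors immediately; give up after the last attempt.
      if (err.code !== "NoSuchKey" || i === attempts - 1) throw err;
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i)); // 200, 400, 800...
    }
  }
}

// Demo with a fake getObject that fails twice before the object "appears".
let calls = 0;
const fakeGetObject = async () => {
  calls++;
  if (calls < 3) {
    const e = new Error("The specified key does not exist.");
    e.code = "NoSuchKey";
    throw e;
  }
  return { Body: "file contents" };
};

withRetry(fakeGetObject, { baseDelayMs: 10 })
  .then(res => console.log(res.Body, calls)); // file contents 3
```

Compared with the waiter, this keeps the added latency in the tens-of-milliseconds range when the object shows up quickly, instead of a full 5-second polling interval.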
Link to the documentation: AWS JavaScript SDK S3 Class
Sometimes, when files are large, they are uploaded using multipart upload, and a trigger is sent to the Lambda before the file is fully uploaded. Presumably this is related to the event types that trigger the Lambda function. In the event settings of the trigger, make sure you include both the PUT and the "complete multipart upload" event types.
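For reference, this is roughly what the bucket notification configuration looks like when both event types are listed explicitly; a sketch only, with a placeholder function ARN (`s3:ObjectCreated:*` would also cover both cases):

```json
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-handler",
      "Events": [
        "s3:ObjectCreated:Put",
        "s3:ObjectCreated:CompleteMultipartUpload"
      ]
    }
  ]
}
```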