I'm seeing the below error from my lambda function when I drop a file.csv into an S3 bucket. The file is not large and I even added a 60 second sleep prior to opening the file for reading, but for some reason the file has the extra ".6CEdFe7C" appended to it. Why is that?
```
[Errno 30] Read-only file system: u'/file.csv.6CEdFe7C': IOError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 75, in lambda_handler
    s3.download_file(bucket, key, filepath)
  File "/var/runtime/boto3/s3/inject.py", line 104, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/var/runtime/boto3/s3/transfer.py", line 670, in download_file
    extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 685, in _download_file
    self._get_object(bucket, key, filename, extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 709, in _get_object
    extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 723, in _do_get_object
    with self._osutil.open(filename, 'wb') as f:
  File "/var/runtime/boto3/s3/transfer.py", line 332, in open
    return open(filename, mode)
IOError: [Errno 30] Read-only file system: u'/file.csv.6CEdFe7C'
```
Code:
```python
def lambda_handler(event, context):
    s3_response = {}
    counter = 0
    event_records = event.get("Records", [])
    s3_items = []
    for event_record in event_records:
        if "s3" in event_record:
            bucket = event_record["s3"]["bucket"]["name"]
            key = event_record["s3"]["object"]["key"]
            filepath = '/' + key
            print(bucket)
            print(key)
            print(filepath)
            s3.download_file(bucket, key, filepath)
```
The result of the above is:
```
mytestbucket
file.csv
/file.csv
[Errno 30] Read-only file system: u'/file.csv.6CEdFe7C'
```
If the key/file is "file.csv", then why does the s3.download_file method try to download "file.csv.6CEdFe7C"? I'm guessing when the function is triggered, the file is file.csv.xxxxx but by the time it gets to line 75, the file is renamed to file.csv?
The filesystem on AWS Lambda is read-only, except for the /tmp directory. (On top of that, the filesystem is not shared between instances of a Lambda function when it scales up.) The random suffix in the error comes from boto3's transfer manager, which downloads to a temporary file named after the destination plus a random suffix and renames it when the download completes; the failure is caused by / being read-only, not by the suffix itself.
Separately, if the permissions between the Lambda function and the Amazon S3 bucket are incomplete or incorrect, Lambda returns an Access Denied error instead.
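For completeness, the function's execution role needs at least read access to the objects in the bucket. A minimal hypothetical policy statement (the bucket name here is just the one from the question, substitute your own) might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mytestbucket/*"
    }
  ]
}
```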
Only `/tmp` is writable in AWS Lambda. Therefore this would work:

```python
filepath = '/tmp/' + key
```
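Putting it together, a minimal corrected handler might look like the sketch below. The `local_path_for` helper is my own naming, not from the original code; it uses `os.path.basename` because S3 keys can contain slashes (e.g. `uploads/file.csv`), which would otherwise point at a non-existent subdirectory under /tmp:

```python
import os

def local_path_for(key):
    # Lambda's filesystem is read-only except /tmp; also strip any
    # "folder" prefix from the S3 key to get a flat local filename.
    return os.path.join('/tmp', os.path.basename(key))

def lambda_handler(event, context):
    import boto3  # imported lazily so the helper above can be tested locally
    s3 = boto3.client('s3')
    for record in event.get('Records', []):
        if 's3' not in record:
            continue
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        filepath = local_path_for(key)
        s3.download_file(bucket, key, filepath)
```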
References:
According to the boto3 download example at http://boto3.readthedocs.io/en/latest/guide/s3-example-download-file.html, the first parameter of `download_file` is the bucket name, the second is the object key, and the third is the local path to download to.
On the other hand, the Amazon docs say each Lambda function gets 512 MB of ephemeral disk space in /tmp. Thus, we have 512 MB for creating files. Here is my code in AWS Lambda; for me it works like a charm:
```python
# bucket here is a boto3 Bucket resource, e.g. boto3.resource('s3').Bucket(name)
bucket.download_file(Key=nombre_archivo, Filename='/tmp/{}'.format(nuevo_nombre))
```
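Since /tmp is capped at 512 MB by default, it can also be worth checking the remaining space before downloading a large object. A quick sketch using only the standard library (the `free_mb` helper is my own addition):

```python
import shutil
import tempfile

def free_mb(path=tempfile.gettempdir()):
    # shutil.disk_usage returns (total, used, free) in bytes
    return shutil.disk_usage(path).free / (1024 * 1024)

# On Lambda you would call free_mb('/tmp') before downloading
# and skip (or stream) the object if it exceeds the remaining space.
print('free space: %.0f MB' % free_mb())
```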