I am writing a Lambda function whose goal is to download a .json file from S3, modify its contents, then re-upload it to the same bucket under a different key.
So in my S3 I have a 'cloud' bucket containing the object cloud/folder/foo.json:
>>> foo.json
{
"value1": "abc",
"value2": "123"
}
I want to download it, change a couple of things accordingly and re-upload it to the same place as bar.json
I have the first part sort of working, in that it downloads the file and modifies its contents, but the result is now a Python dictionary.
import boto3
import json

def get_json():
    client = boto3.client('s3')
    response = client.get_object(Bucket='cloud', Key='folder/foo.json')
    data = response['Body'].read()
    bar = json.loads(data)
    bar["value1"] = "do-re-mi"
    # TODO: implement uploading here

def lambda_handler(event, context):
    get_json()
    return 'Hello from Lambda'
So now...
>>> bar
{
"value1": "do-re-mi",
"value2": "123"
}
The bar variable is correct, but it is a dictionary. How can I upload it directly to that bucket as bar.json? I saw other examples here, but I am not keen on putting my AWS secret or access keys anywhere. I also assume that because I am using Lambda I cannot create a file on the machine; when I try something like the below:
g = open('myfile.json', 'w')
g.write(json.dumps(bar, indent=4, sort_keys=True))
g.close()

with open('myfile.json', 'rb') as f:
    client.upload_fileobj(f, 'cloud', 'bar.json')
I get: "errorType": "IOError", "errorMessage": "[Errno 30] Read-only file system: 'myfile.json'"
Any advice would be greatly appreciated. Thanks!
Thanks to monchitos82 I have learned that you can write to /tmp in Lambda. So all I had to do was add /tmp/ to the beginning of my file paths and it worked.
g = open('/tmp/myfile.json', 'w')
g.write(json.dumps(bar, indent=4, sort_keys=True))
g.close()

with open('/tmp/myfile.json', 'rb') as f:
    client.upload_fileobj(f, 'cloud', 'bar.json')
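For completeness, the temp file can be skipped entirely: `put_object` accepts a string or bytes `Body`, so the dictionary can be serialized with `json.dumps` and uploaded in one call. A minimal sketch (the `upload_json` helper name is mine, not part of the original code):

```python
import json

def upload_json(client, bucket, key, data):
    """Serialize a dict to JSON and upload it with put_object -- no temp file.

    `client` is a boto3 S3 client (boto3.client('s3')); it is passed in
    rather than created here so the serialization logic is easy to test.
    """
    body = json.dumps(data, indent=4, sort_keys=True)
    client.put_object(Bucket=bucket, Key=key,
                      Body=body, ContentType='application/json')
    return body

def lambda_handler(event, context):
    import boto3  # included in the Lambda Python runtime by default
    client = boto3.client('s3')
    response = client.get_object(Bucket='cloud', Key='folder/foo.json')
    bar = json.loads(response['Body'].read())
    bar["value1"] = "do-re-mi"
    # 'folder/bar.json' keeps the output next to foo.json; use plain
    # 'bar.json' to place it at the bucket root, as in the snippet above.
    upload_json(client, 'cloud', 'folder/bar.json', bar)
    return 'Hello from Lambda'
```

This avoids touching the filesystem at all, which also sidesteps the 512 MB /tmp limit for larger payloads.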