I am trying to read the content of a CSV file that was uploaded to an S3 bucket. To do so, I get the bucket name and the file key from the event that triggered the Lambda function and read the file line by line. Here is my code:
import json
import os
import boto3
import csv

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        file_key = record['s3']['object']['key']
        s3 = boto3.client('s3')
        csvfile = s3.get_object(Bucket=bucket, Key=file_key)
        csvcontent = csvfile['Body'].read().split(b'\n')
        data = []
        with open(csvcontent, 'r') as csv_file:
            csv_file = csv.DictReader(csv_file)
            data = list(csv_file)
The exact error I’m getting in CloudWatch is:
[ERROR] TypeError: expected str, bytes or os.PathLike object, not list
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 19, in lambda_handler
with open(csvcontent, 'r') as csv_file:
Could someone help me fix this? I appreciate any help you can provide, as I am new to Lambda.
To get the CSV file data from the S3 bucket in a clean, easy-to-index format, the code below helped me a lot:
import boto3
import csv

key = 'key-name'
bucket = 'bucket-name'

s3_resource = boto3.resource('s3')
s3_object = s3_resource.Object(bucket, key)

data = s3_object.get()['Body'].read().decode('utf-8').splitlines()

lines = csv.reader(data)
headers = next(lines)
print('headers: %s' % (headers))

for line in lines:
    # print the complete line
    print(line)
    # print individual fields by index
    print(line[0], line[1])
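If you prefer to access fields by column name instead of numeric index, the same decoded lines can be passed to csv.DictReader. This is only a sketch, assuming the same placeholder bucket/key and that the first row of the file contains the headers:

import boto3
import csv

# A minimal sketch: same placeholder bucket/key as above, and the first
# CSV row is assumed to hold the column headers.
s3_object = boto3.resource('s3').Object('bucket-name', 'key-name')
lines = s3_object.get()['Body'].read().decode('utf-8').splitlines()

for row in csv.DictReader(lines):
    # each row is a dict keyed by the header names
    print(row)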
csvfile = s3.get_object(Bucket=bucket, Key=file_key)
csvcontent = csvfile['Body'].read().split(b'\n')
Here you have already retrieved the file contents and split them into lines. I'm not sure why you're trying to open something again; you can just pass csvcontent into your reader:
csv_data = csv.DictReader(csvcontent)
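One caveat: the csv module expects text, and read().split(b'\n') produces a list of bytes, so decode the body before building the reader. Below is a minimal sketch of the corrected handler under that assumption, with the bucket and key taken from the triggering event as in the question:

import boto3
import csv

s3 = boto3.client('s3')

def lambda_handler(event, context):
    data = []
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        file_key = record['s3']['object']['key']
        # decode to str so csv.DictReader receives text lines, not bytes
        body = s3.get_object(Bucket=bucket, Key=file_key)['Body'].read()
        csvcontent = body.decode('utf-8').splitlines()
        data.extend(csv.DictReader(csvcontent))
    return data

No call to open() is needed, because the content is already in memory rather than in a local file.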