I'm trying to automate loading a CSV file into a MySQL table whenever one is received in an S3 bucket.
My strategy is that S3 fires an event when it receives a file in a specified bucket (let's call it 'bucket-file'). This event notifies an AWS Lambda function that downloads and processes the file, inserting each row into a MySQL table (let's call it 'target_table').
We have to take into consideration that the RDS instance is in a VPC.
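For reference, the event that S3 hands to Lambda looks roughly like this. This is a trimmed sketch showing only the fields my handler below actually reads; the bucket name and object key are illustrative:

sample_event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'bucket-file'},
            'object': {'key': 'data.csv'}
        }
    }]
}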
The current permission configuration of the bucket is:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-file/*"
        }
    ]
}
I've created a role for the AWS Lambda function with the AmazonS3FullAccess and AWSLambdaVPCAccessExecutionRole managed policies attached.
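(AmazonS3FullAccess is broader than the function needs; a read-only statement scoped to the bucket would be enough for the download. A sketch, in the same shape as the bucket policy above, assuming the same 'bucket-file' bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-file/*"
        }
    ]
}
)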
The Lambda code is:
from __future__ import print_function
import csv
import logging
import sys
import uuid

import boto3
import pymysql

import rds_config

rds_host = rds_config.rds_host
name = rds_config.db_username
password = rds_config.db_password
db_name = rds_config.db_name

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Connect once, outside the handler, so warm Lambda invocations reuse the connection.
try:
    conn = pymysql.connect(host=rds_host, user=name, passwd=password,
                           db=db_name, connect_timeout=5)
except Exception as e:
    logger.error("ERROR: Unexpected error: Could not connect to MySQL instance.")
    logger.error(e)
    sys.exit()

logger.info("SUCCESS: Connection to RDS MySQL instance succeeded")

s3_client = boto3.client('s3')


def handler(event, context):
    # Pull the bucket and object key out of the S3 event notification.
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
    s3_client.download_file(bucket, key, download_path)

    with open(download_path) as f:  # file() is Python 2 only; open() works in both
        csv_data = csv.reader(f)
        with conn.cursor() as cur:
            for idx, row in enumerate(csv_data):
                logger.info(row)
                try:
                    # Let the driver quote the parameters; don't wrap %s in quotes,
                    # or the values get inserted with literal quote characters.
                    cur.execute('INSERT INTO target_table (column1, column2, column3) '
                                'VALUES (%s, %s, %s)', row)
                except Exception as e:
                    logger.error(e)
                # Commit in batches of 100 rows.
                if idx % 100 == 0:
                    conn.commit()
            conn.commit()

    return 'File loaded into RDS: ' + str(download_path)
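The rds_config module imported at the top isn't shown above; it's just a plain Python file packaged with the function that holds the connection settings. A minimal sketch (the attribute names match what the code reads; the values are placeholders):

# rds_config.py -- packaged alongside the handler.
# Placeholder values; substitute your own RDS endpoint and credentials.
rds_host = 'mydbinstance.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com'
db_username = 'db_user'
db_password = 'db_password'
db_name = 'target_db'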
I've been testing the function: S3 sends the event when a file is uploaded, and Lambda connects to the RDS instance and receives the notification. I've checked that the bucket name is 'bucket-file' and that the filename is right. The problem is that when the function reaches the line s3_client.download_file(bucket, key, download_path),
it hangs until the Lambda timeout is reached.
Watching the logs, I see:
[INFO] 2017-01-24T14:36:52.102Z SUCCESS: Connection to RDS mysql instance succeeded
[INFO] 2017-01-24T14:36:53.282Z Starting new HTTPS connection (1): bucket-files.s3.amazonaws.com
[INFO] 2017-01-24T14:37:23.223Z Starting new HTTPS connection (2): bucket-files.s3.amazonaws.com
2017-01-24T14:37:48.684Z Task timed out after 60.00 seconds
I've also read that if you are working within a VPC, then in order to access an S3 bucket you have to create a VPC Endpoint that grants the subnet access to S3. I've tried this solution as well, but the result is the same.
I'd appreciate some ideas.
Thanks in advance!
I finally got it!
The problem was indeed the VPC. As I said, I had created a VPC Endpoint to make the S3 service accessible from my VPC, but my route table was configured incorrectly.
So, in conclusion: if your Lambda function runs inside a VPC and needs to access S3, you need to create a VPC Endpoint and route to it from the function's subnets; a rough sketch of doing this with boto3 follows. Besides that, if you want to access any other internet service outside your VPC, you need to configure a NAT Gateway.
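Creating the S3 gateway endpoint and attaching it to the right route table looks roughly like this with boto3 (a sketch; the region, VPC ID, and route table ID are placeholders for the ones your Lambda subnets actually use):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # your region

# Create a Gateway endpoint for S3 and attach it to the route table of the
# subnets the Lambda function runs in. If the route table is wrong (my
# mistake), traffic to S3 never goes through the endpoint.
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',             # placeholder VPC ID
    ServiceName='com.amazonaws.us-east-1.s3',  # S3 service name for the region
    RouteTableIds=['rtb-0123456789abcdef0'],   # placeholder route table ID
)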