
How to load a pickle file from S3 to use in AWS Lambda?


I am currently trying to load a pickled file from S3 into AWS Lambda and store it in a list (the pickle is a list).

Here is my code:

import pickle
import boto3

s3 = boto3.resource('s3')
with open('oldscreenurls.pkl', 'rb') as data:
    old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data)

I get the following error even though the file exists:

FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' 

Any ideas?

asked Feb 24 '18 by mifin

People also ask

How do I transfer files from S3 bucket to Lambda?

Attach a bucket access policy to the IAM role created in the destination account. The Lambda function assumes that destination IAM role and copies the S3 object from the source bucket to the destination bucket. In the Lambda console, choose Create a Lambda function and move directly to configuring the function.
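A minimal sketch of such a copy handler, assuming placeholder bucket names and key (the function's execution role must be able to read the source bucket and write to the destination):

import boto3

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    # copy_from performs the copy server-side; the object bytes never
    # pass through the Lambda function itself
    copy_source = {'Bucket': 'source-bucket', 'Key': 'path/to/object'}
    s3.Object('destination-bucket', 'path/to/object').copy_from(CopySource=copy_source)
    return {'status': 'copied'}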

Can Lambda read from S3?

The S3 object key and bucket name are passed into your Lambda function via the event parameter. You can then get the object from S3 and read its contents.
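For an S3-triggered function, a minimal handler looks roughly like this (the record layout is the standard S3 event notification format):

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')

def lambda_handler(event, context):
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    # Object keys arrive URL-encoded in the event, so decode them
    key = unquote_plus(record['s3']['object']['key'])
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    print(f"{key} from {bucket} is {len(body)} bytes")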

How do I get data from S3 bucket in Python lambda?

Create a Lambda function that transforms data for your use case. Create an S3 Object Lambda Access Point from the S3 Management Console and select the Lambda function that you created. Provide a supporting S3 Access Point to give S3 Object Lambda access to the original object.
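A sketch of the transforming function itself, assuming a text object; the event fields (getObjectContext with a presigned URL for the original object plus the route and token for the response) follow the documented Object Lambda event shape:

import urllib.request
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    ctx = event['getObjectContext']
    # Fetch the original object through the presigned URL supplied in the event
    original = urllib.request.urlopen(ctx['inputS3Url']).read()
    transformed = original.decode('utf-8').upper().encode('utf-8')
    # Stream the transformed bytes back to the GetObject caller
    s3.write_get_object_response(
        RequestRoute=ctx['outputRoute'],
        RequestToken=ctx['outputToken'],
        Body=transformed,
    )
    return {'statusCode': 200}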

Can Lambda download code from S3?

AWS Lambda supports deploying function code directly from S3, without requiring you to first download the package to a client.
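With boto3, that looks roughly like this; the function name, bucket, and key are placeholders for a deployment package already uploaded to S3:

import boto3

lambda_client = boto3.client('lambda')

# Point the function at a zip package that already lives in S3
lambda_client.update_function_code(
    FunctionName='my-function',
    S3Bucket='my-deploy-bucket',
    S3Key='builds/my-function.zip',
)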

How to read files from S3 bucket using Python Lambda?

You can use the boto3 APIs to read files from an S3 bucket in a Python Lambda function: read a single file, or list and read all files under a specific S3 prefix. Log in to your AWS account and navigate to the AWS Lambda service.
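A sketch of the list-and-read case, with a placeholder bucket and prefix:

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Paginate so the listing works even past the 1,000-key page limit
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket='my-bucket', Prefix='reports/'):
        for obj in page.get('Contents', []):
            body = s3.get_object(Bucket='my-bucket', Key=obj['Key'])['Body'].read()
            print(obj['Key'], len(body))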

How do I upload S3 files to a lambda function?

Next, navigate to the Configuration tab of your Lambda function and choose Environment variables to edit the variables. Add a BUCKET_NAME environment variable whose value is an existing S3 bucket. The function will upload files to this bucket.
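The upload side might look like this sketch; the local path and object key are placeholders, and BUCKET_NAME is the environment variable configured above:

import os
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = os.environ['BUCKET_NAME']
    # /tmp is the only writable path inside the Lambda runtime
    s3.upload_file('/tmp/report.csv', bucket, 'reports/report.csv')
    return {'uploaded_to': bucket}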

How to get S3 data from AWS S3 bucket?

Create a file_key to hold the name of the S3 object; you can prefix the subfolder names if your object is under a subfolder of the bucket. Concatenate the bucket name and the file key to generate the S3 URI, then fetch the data with the read_csv() method in awswrangler: wr.s3.read_csv(path=s3uri).
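Put together, with placeholder names:

import awswrangler as wr

bucket = 'my-bucket'
file_key = 'subfolder/data.csv'      # prefix subfolder names into the key
s3uri = f's3://{bucket}/{file_key}'  # concatenate bucket name and file key

# read_csv fetches the object and returns a pandas DataFrame
df = wr.s3.read_csv(path=s3uri)
print(df.head())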

How to load data from AWS S3 to SageMaker?

SageMaker provides the compute capacity to build, train, and deploy ML models. You can load data from AWS S3 into SageMaker using the Boto3 library to create, train, and deploy models there. In this tutorial, you'll learn how to load data from AWS S3 into a SageMaker Jupyter notebook.
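Inside the notebook, a minimal sketch with a placeholder bucket and key (the notebook's execution role needs s3:GetObject on the bucket):

import boto3

s3 = boto3.client('s3')
# Download the training data next to the notebook
s3.download_file('my-bucket', 'datasets/train.csv', 'train.csv')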


1 Answer

Super simple solution

import pickle
import boto3

s3 = boto3.resource('s3')

# Fetch the object body as bytes and unpickle it directly, with no
# intermediate file on disk
my_pickle = pickle.loads(
    s3.Bucket("bucket_name").Object("key_to_pickle.pickle").get()['Body'].read()
)
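If you do want the file on disk (say, to reuse it across warm invocations), the asker's download_fileobj call also works once the local file is opened for writing rather than reading, and under /tmp, the only writable path in Lambda. A sketch with the original bucket and key:

import pickle
import boto3

s3 = boto3.resource('s3')

# 'rb' on a file that does not exist yet raises the FileNotFoundError from
# the question; open the download target with 'wb' instead
with open('/tmp/oldscreenurls.pkl', 'wb') as data:
    s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data)

with open('/tmp/oldscreenurls.pkl', 'rb') as data:
    old_list = pickle.load(data)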
answered Sep 22 '22 by kindjacket