I am attempting to update Redshift from a Lambda function using Python. To do this, I am combining two code fragments. Both fragments are functional when I run them separately.
Updating Redshift from PyDev for Eclipse
import psycopg2
conn_string = "dbname='name' port='0000' user='name' password='pwd' host='url'"
conn = psycopg2.connect(conn_string)
cursor = conn.cursor()
cursor.execute("UPDATE table SET attribute='new'")
conn.commit()
cursor.close()
Receiving Content Uploaded to S3 Bucket (Pre-Built Template Available on Lambda)
from __future__ import print_function
import json
import urllib
import boto3
print('Loading function')
s3 = boto3.client('s3')
def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
Since both of these segments worked, I tried to combine them so that I could update Redshift upon the upload of a file to S3:
from __future__ import print_function
import json
import urllib
import boto3
import psycopg2
print('Loading function')
s3 = boto3.client('s3')
def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')

    conn_string = "dbname='name' port='0000' user='name' password='pwd' host='url'"
    conn = psycopg2.connect(conn_string)
    cursor = conn.cursor()
    cursor.execute("UPDATE table SET attribute='new'")
    conn.commit()
    cursor.close()

    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        body = response['Body'].read()  # read the streaming body once; a second read() would return nothing
        print("CONTENT TYPE: " + body)
        return body
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
Since I am using an outside library, I need to create a deployment package. I created a new folder (lambda_function1) and moved my .py file (lambda_function1.py) to that folder. I ran the following command to install psycopg2 in that folder:
pip install psycopg2 -t \lambda_function1
I receive the following feedback:
Collecting psycopg2
Using cached psycopg2-2.6.1-cp34-none-win_amd64.whl
Installing collected packages: psycopg2
Successfully installed psycopg2-2.6.1
I then zipped the contents of the directory and uploaded that zip to my Lambda function. When I upload a document to the bucket the function monitors, I receive the following error in my CloudWatch log:
Unable to import module 'lambda_function1': No module named _psycopg
When I look in the library, the only thing named "_psycopg" is "_psycopg.pyd".
What is causing this problem? Does it matter that Lambda uses Python 2.7 when I use 3.4? Does it matter that I zipped the contents of my file on a Windows machine? Has anyone been able to successfully connect to Redshift from Lambda?
In order for this to work, you need to build psycopg2 with a statically linked libpq.so library. Check out this repo: https://github.com/jkehler/awslambda-psycopg2. It has a prebuilt psycopg2 package and instructions for building it yourself.
Back to your questions:
What is causing this problem?
psycopg2 needs to be built and compiled with statically linked libraries for Linux.
Does it matter that Lambda uses Python 2.7 when I use 3.4?
Yes, it does; Lambda only supports Python 2.7. Just create a virtual environment and install all the necessary packages in there.
Does it matter that I zipped the contents of my file on a Windows machine?
As long as all the libraries you zipped can run on Linux, it doesn't.
Has anyone been able to successfully connect to Redshift from lambda?
Yes.
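For illustration only, here is a minimal sketch of such a Lambda handler, assuming psycopg2 has been built for Linux as described above. The environment variable names are hypothetical; substitute your own cluster endpoint and credentials:

import os
import psycopg2  # must be the Linux build with statically linked libpq

def lambda_handler(event, context):
    # Hypothetical env var names -- read credentials from the Lambda environment rather than hardcoding them
    conn = psycopg2.connect(
        dbname=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
        host=os.environ['DB_HOST'],
        port=os.environ.get('DB_PORT', '5439'),  # 5439 is the default Redshift port
    )
    try:
        cursor = conn.cursor()
        cursor.execute("UPDATE table SET attribute='new'")
        conn.commit()
        cursor.close()
    finally:
        conn.close()  # release the connection even if the UPDATE fails
    return 'update complete'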
I just came across this same problem. I stumbled across the same github project that was noted in the other answer which explained the problem as follows:
Due to AWS Lambda missing the required PostgreSQL libraries in the AMI image, we needed to compile psycopg2 with the PostgreSQL libpq.so library statically linked instead of the default dynamic link.
This has been noted in the previous answer, and I started to follow the instructions to build myself a version of psycopg2 with a statically linked PostgreSQL library. I found a much easier option though. I noticed on the psycopg2 github page the following:
You can also obtain a stand-alone package, not requiring a compiler or external libraries, by installing the psycopg2-binary package from PyPI:
$ pip install psycopg2-binary
The binary package is a practical choice for development and testing but in production it is advised to use the package built from sources.
When I pip installed the psycopg2-binary package and included it in my requirements.txt file, I was able to connect to my PostgreSQL database from a Lambda function flawlessly. I am using Chalice, which I highly recommend. I realize that psycopg2 recommends not using the binary version in production, but I don't see a huge difference between using the binary version and compiling and statically linking it yourself. Someone please correct me if I'm wrong on that.
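For illustration, a minimal Chalice route along these lines is what I mean; the app name, route, and connection details are placeholders, not my production code:

from chalice import Chalice
import psycopg2  # resolved from the psycopg2-binary package in requirements.txt

app = Chalice(app_name='redshift-demo')  # placeholder app name

@app.route('/update', methods=['POST'])
def update():
    # Placeholder connection details -- substitute your own endpoint and credentials
    conn = psycopg2.connect(dbname='name', user='name', password='pwd',
                            host='url', port=5439)
    cursor = conn.cursor()
    cursor.execute("UPDATE table SET attribute='new'")
    conn.commit()
    cursor.close()
    conn.close()
    return {'status': 'ok'}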
To use psycopg2 with AWS Lambda, install the aws-psycopg2 package. AWS supports psycopg2, but the way to get psycopg2 is a bit different: since AWS itself has a compiled library for psycopg2, we need to install it via aws-psycopg2.
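As I understand the package, the Lambda-compatible build is still imported under the usual psycopg2 name, so only the dependency changes, not the function code. A minimal sketch, with the question's placeholder connection string:

# requirements.txt contains the single line: aws-psycopg2
import psycopg2  # the aws-psycopg2 build is imported under the usual name (assumption)

def lambda_handler(event, context):
    # Placeholder connection string, as in the question
    conn = psycopg2.connect("dbname='name' port='0000' user='name' password='pwd' host='url'")
    conn.close()
    return 'connected'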
Oh boy! While some of the other answers may be really great and working, I just stumbled upon https://pypi.org/project/aws-psycopg2/ and it worked like a charm for me. Steps:
mkdir aws-psycopg2
cd aws-psycopg2

Create get_layer_packages.sh with the following contents:

export PKG_DIR="python"
rm -rf ${PKG_DIR} && mkdir -p ${PKG_DIR}
docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.6 \
    pip install -r requirements.txt --no-deps -t ${PKG_DIR}

Create requirements.txt containing the single line:

aws-psycopg2

Then run:

chmod +x get_layer_packages.sh
./get_layer_packages.sh
zip -r aws-psycopg2.zip .

Upload this zip as an AWS Lambda layer!
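If you prefer the CLI to the console for that last step, something like the following should work (the layer and function names are placeholders):

$ aws lambda publish-layer-version --layer-name aws-psycopg2 \
    --zip-file fileb://aws-psycopg2.zip --compatible-runtimes python3.6
$ aws lambda update-function-configuration --function-name my-function \
    --layers <layer-version-arn-from-previous-command>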
This is the simplest way I found in all the methods I tried:
I added a Lambda layer to my Lambda functions that needed psycopg2. Here is a list of available Lambda layers: https://github.com/jetbridge/psycopg2-lambda-layer
I'm using the serverless framework and this is what my Lambda function looks like:
functions:
  example:
    handler: handler.example
    layers:
      - arn:aws:lambda:us-east-1:898466741470:layer:psycopg2-py37:3
    events:
      - http:
          path: example
          method: post
          authorizer: aws_iam
          cors: true
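For completeness, the handler.example function that configuration points at could be as simple as this sketch (connection details are placeholders; psycopg2 comes from the attached layer):

# handler.py -- psycopg2 is provided by the attached Lambda layer
import psycopg2

def example(event, context):
    # Placeholder connection details -- substitute your own endpoint and credentials
    conn = psycopg2.connect(dbname='name', user='name', password='pwd',
                            host='url', port=5439)
    cursor = conn.cursor()
    cursor.execute("UPDATE table SET attribute='new'")
    conn.commit()
    cursor.close()
    conn.close()
    return {'statusCode': 200, 'body': 'updated'}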
Assuming your packaging is correct, the "no module named psycopg2" error typically indicates that the binary file(s) of your psycopg2 deployment are incorrect for your target OS or Python version.
For Lambdas, we have found that the psycopg2-binary package works (using manylinux_x86_64 wheels). There is a reported risk of segfaults due to the presence of competing libssl binaries, though we haven't hit that yet. (This is basically a +1 for jshammon's answer above.)
The "proper solution" is probably jkehler's psycopg2, recompiled specifically for the Lambda environment (which lacks libpq.so), but it doesn't currently support SSL + Python 3.7, and we are too Windows-based to recompile it ourselves.
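For anyone trying the manylinux wheel route from Windows or macOS, pip can fetch a Linux wheel into a package directory directly; a hedged example of the kind of command involved (the wheel tag and target directory are illustrative, so check which tags your psycopg2-binary version actually ships):

$ pip install --platform manylinux2014_x86_64 --only-binary=:all: -t package/ psycopg2-binary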