How to configure `Terraform` to upload a zip file to an `s3` bucket and then deploy it to Lambda

I use Terraform as the infrastructure framework in my application. Below is the configuration I use to deploy Python code to Lambda. It does three steps: 1. zip all dependencies and source code into a zip file; 2. upload the zipped file to an S3 bucket; 3. deploy it to the Lambda function.

But what happens is that the deploy command `terraform apply` fails with the error below:

Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
    status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de

  on config.tf line 48, in resource "aws_lambda_function" "test_lambda":
  48: resource "aws_lambda_function" "test_lambda" {



Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
    status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594

  on config.tf line 67, in resource "aws_lambda_function" "praw_crawler":
  67: resource "aws_lambda_function" "praw_crawler" {

It means the deployment package doesn't exist in the S3 bucket yet. But the command succeeds the second time I run it, so it looks like a timing issue: right after the zip file is uploaded, it isn't yet visible in the S3 bucket, which is why the first deploy fails. A few seconds later, the second run finishes successfully and very quickly. Is there anything wrong in my configuration file?

The full terraform configuration file can be found: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
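Simplified, the pattern is roughly this (the resource names are illustrative, not the exact contents of config.tf): the Lambda's `s3_bucket` and `s3_key` are plain strings, so nothing ties the function to the upload resource.

# Simplified sketch of the failing pattern (names are illustrative)
data "archive_file" "zip" {
  type        = "zip"
  source_dir  = "src"
  output_path = "deploy.zip"
}

resource "aws_s3_bucket_object" "upload" {
  bucket = "my-deploy-bucket"
  key    = "deploy.zip"
  source = "deploy.zip"        # a plain path, not a reference to the archive
}

resource "aws_lambda_function" "test_lambda" {
  function_name = "quote-crawler"
  s3_bucket     = "my-deploy-bucket"
  s3_key        = "deploy.zip" # a plain string: no dependency on the upload above
  handler       = "handler.handler"
  runtime       = "python3.7"
  role          = "${aws_iam_role.role.arn}"
}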

Asked by Joey Yi Zhao, Jan 26 '23
1 Answer

You need to declare the dependencies between the resources properly to achieve this; otherwise, the apply will fail because the steps can run out of order.

First, zip the files:

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
}
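The upload step below references `aws_s3_bucket.bucket`, which is assumed to be defined elsewhere in the configuration. A minimal sketch of that bucket resource (its name and arguments are assumptions, not part of the original answer):

# Assumed bucket definition; the name matches the bucket used by the Lambda below
resource "aws_s3_bucket" "bucket" {
  bucket = "${var.env_prefix_name}${var.s3_suffix}"
}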

Then upload it to S3, declaring its dependency on the zip step: `source = "${data.archive_file.source.output_path}"` references the archive, which makes the upload depend on the zip.

# Upload the zip to S3; the Lambda function is then updated from S3
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}" # referencing the archive makes this depend on the zip step
}
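As an optional refinement (an addition, not part of the original answer): setting `etag` from the archive's MD5 makes Terraform re-upload the object whenever the zip contents change, not just when the path changes. The `archive_file` data source exposes this as `output_md5`:

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}"
  etag   = "${data.archive_file.source.output_md5}" # forces a re-upload when the zip contents change
}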

Then you are good to go to deploy the Lambda function. To make it dependent on the upload, just this one line does the magic: `s3_key = "${aws_s3_bucket_object.file_upload.key}"`

  resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  function_name = "alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  s3_bucket   = "${var.env_prefix_name}${var.s3_suffix}"
  s3_key      = "${aws_s3_bucket_object.file_upload.key}" # its mean its depended on upload key
  memory_size = 1024
  timeout     = 900
  timeouts {
  create = "30m"
  }
  runtime          = "nodejs8.10"
  role             = "${aws_iam_role.role.arn}"
  source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
  handler          = "index.handler"

}
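If you prefer to make the ordering explicit instead of relying on the attribute references above, `depends_on` achieves the same effect. A sketch using the resource names from this answer:

resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  # ... same arguments as above ...
  depends_on = ["aws_s3_bucket_object.file_upload"] # explicit ordering: upload the zip first
}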
Answered by Adiii, May 28 '23