Terraform - Upload file to S3 on every apply

I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads the file as expected, but I have two problems:

  1. The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
  2. When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload the file every time I run terraform apply and create a new version.

What am I doing wrong? Here is my Terraform config:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my_bucket_name"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.my_bucket.id}"
  key    = "my_bucket_key"
  source = "my_files.zip"
}

output "my_bucket_file_version" {
  value = "${aws_s3_bucket_object.file_upload.version_id}"
}
asked May 13 '19 by Muthaiah PL

2 Answers

Terraform only makes changes to the remote objects when it detects a difference between the configuration and the remote object attributes. In the configuration as you've written it so far, the configuration includes only the filename. It includes nothing about the content of the file, so Terraform can't react to the file changing.

To make subsequent changes, there are a few options:

  • You could use a different local filename for each new version.
  • You could use a different remote object path for each new version.
  • You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.

The final of these seems closest to what you want in this case. To do that, add the etag argument and set it to be an MD5 hash of the file:

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.my_bucket.id}"
  key    = "my_bucket_key"
  source = "${path.module}/my_files.zip"
  etag   = "${filemd5("${path.module}/my_files.zip")}"
}

With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
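Alternatively, the second option above (a different remote object path for each new version) could be sketched by embedding a content hash in the object key. This is a sketch under assumptions: the "releases/" prefix is hypothetical, and filemd5 requires Terraform 0.12 or later:

```hcl
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.my_bucket.id}"
  # A new object key per content version; the "releases/" prefix is
  # illustrative, not from the original question.
  key    = "releases/${filemd5("${path.module}/my_files.zip")}/my_files.zip"
  source = "${path.module}/my_files.zip"
}
```

Because the key changes whenever the file content changes, Terraform sees a new object to create, and older versions remain at their old keys.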


(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)

answered Sep 18 '22 by Martin Atkins

You shouldn't be using Terraform to do this. Terraform is supposed to orchestrate and provision your infrastructure and its configuration, not files. That said, Terraform is not aware of changes to your files' contents; unless you change their names, Terraform will not update the state.

Also, it is better to use a local-exec provisioner for this. Something like:

resource "aws_s3_bucket" "my-bucket" {
  # ...

  provisioner "local-exec" {
    # Use self to refer to the containing resource, and the s3:// scheme
    # that the aws s3 cp command expects.
    command = "aws s3 cp path_to_my_file s3://${self.id}/"
  }
}
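Note that provisioners only run when the resource they are attached to is created, so the cp above will not re-run for later file changes. A minimal sketch of one way around this, assuming my_files.zip lives in the module directory, is a null_resource whose triggers map hashes the file so the provisioner re-runs whenever the content changes:

```hcl
resource "null_resource" "upload" {
  # Replacing this resource (and re-running its provisioner) is triggered
  # whenever the file's MD5 hash changes.
  triggers = {
    file_hash = filemd5("${path.module}/my_files.zip")
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${path.module}/my_files.zip s3://${aws_s3_bucket.my-bucket.id}/my_files.zip"
  }
}
```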
answered Sep 18 '22 by Stargazer