terraform resource s3 upload file is not updated

Tags:

terraform

I am using Terraform to upload a file with contents to S3. However, when the content changes, I need the S3 object to be updated as well. But since the state file records that the upload was already completed, Terraform doesn't upload a new file.

resource "local_file" "timestamp" {
  filename = "timestamp"
  content = "${timestamp()}"
}


resource "aws_s3_bucket_object" "upload" {
 bucket = "bucket"
 key = "date"
 source = "timestamp"
}



Expected:

aws_s3_bucket_object change detected aws_s3_bucket_object.timestamp Creating...

Result:

aws_s3_bucket_object Refreshing state...

ShakyaS asked Apr 24 '19


People also ask

Does an S3 upload overwrite the existing file?

By default, when you upload a file with the same key, it overwrites the existing object. If you want the previous version to remain available, you need to enable versioning on the bucket.
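For example, with the 3.x AWS provider used elsewhere on this page, versioning can be enabled directly on the bucket resource. A minimal sketch, assuming a hypothetical bucket name:

resource "aws_s3_bucket" "bucket" {
  bucket = "bucket"

  # Keep previous versions of overwritten objects available
  versioning {
    enabled = true
  }
}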

Can S3 files be updated?

S3 has no concept of updating an existing file; you can only overwrite it. When that overwrite happens, S3 treats the result as a new object, or, if versioning is enabled, as a new version of the object, and that version gets its own unique version ID.
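If versioning is enabled on the bucket, Terraform exposes the resulting version ID as an attribute on the object resource. A minimal sketch, reusing the upload resource from the question:

# Surface the version ID assigned by S3 after each upload
output "uploaded_version_id" {
  value = "${aws_s3_bucket_object.upload.version_id}"
}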

What is ETAG in Terraform?

An ETag is the entity tag S3 stores for an object; for a simple (non-multipart, unencrypted) upload it is the MD5 hash of the content. In Terraform, you can set the object's etag argument to let Terraform recognize when the content has changed, regardless of the local filename or object path.

Why is Terraform not updating my files?

Terraform is designed to orchestrate and provision your infrastructure and its configuration, not to manage file contents. Terraform is not aware of changes inside your files: unless their names change, it will not update the state. One workaround is to push the file with a local-exec provisioner instead, as sketched below.
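A minimal sketch of that workaround, assuming the AWS CLI is installed and using the hypothetical bucket and key from the question; the filemd5 trigger forces the provisioner to re-run whenever the file content changes:

resource "null_resource" "upload" {
  # Re-run the provisioner whenever the file's content hash changes
  triggers = {
    file_hash = "${filemd5("${path.module}/timestamp")}"
  }

  # Push the file to S3 outside of Terraform's own resource tracking
  provisioner "local-exec" {
    command = "aws s3 cp ${path.module}/timestamp s3://bucket/date"
  }
}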

How to import an S3 bucket in Terraform?

An S3 bucket can be imported using the bucket name, e.g. $ terraform import aws_s3_bucket.bucket bucket-name. The policy argument is not imported; it is deprecated in version 3.x of the Terraform AWS Provider and will be removed in version 4.0. Use the aws_s3_bucket_policy resource to manage the S3 bucket policy instead.
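A minimal sketch of the dedicated policy resource; the public-read policy document here is purely a hypothetical example:

resource "aws_s3_bucket_policy" "bucket" {
  bucket = "bucket-name"

  # Example policy document; replace with your own statements
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
POLICY
}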

How to get Terraform to recognize changes to a file's content?

You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path. To do that, add the etag argument and set it to an MD5 hash of the file, as shown in the accepted answer below.


1 Answer

When you give Terraform the path to a file rather than the content to upload directly, it is the name of the file, not its contents, that decides whether the resource needs to be updated.

For a short piece of data as shown in your example, the easiest solution is to specify the data directly in the resource configuration:

resource "aws_s3_bucket_object" "upload" {
 bucket  = "bucket"
 key     = "date"
 content = "${timestamp()}"
}

If your file is actually too large to reasonably load into a string variable, or if it contains raw binary data that cannot be loaded into a string, you can set the etag of the object to an MD5 hash of the content so that the provider can see when the content has changed:

resource "aws_s3_bucket_object" "upload" {
 bucket  = "bucket"
 key     = "date"
 source  = "${path.module}/timestamp"
 etag    = "${filemd5("${path.module}/timestamp")}"
}

By setting the etag, any change to the content of the file will cause this hash result to change and thus allow the provider to detect that the object needs to be updated.
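As an aside not covered in the original answer: in version 4.x and later of the AWS provider, aws_s3_bucket_object is superseded by aws_s3_object, which adds a source_hash argument for this purpose; unlike etag, it also works when the object is encrypted with SSE-KMS, where the ETag that S3 stores is no longer an MD5 of the content. A minimal sketch under those assumptions:

resource "aws_s3_object" "upload" {
  bucket = "bucket"
  key    = "date"
  source = "${path.module}/timestamp"

  # Tracked purely by Terraform, so it works with SSE-KMS encrypted objects
  source_hash = filemd5("${path.module}/timestamp")
}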

Martin Atkins answered Oct 19 '22