When we try to run Terraform with remote state handling, we get the following error:
Error refreshing state: state data in S3 does not have the expected content. This may be caused by unusually long delays in S3 processing a previous state update. Please wait for a minute or two and try again. If this problem persists, and neither S3 nor DynamoDB are experiencing an outage, you may need to manually verify the remote state and update the Digest value stored in the DynamoDB table to the following value
Terraform state locking makes sure that the state is "locked" while it is in use by another user. Here, we configure AWS S3 (Simple Storage Service) to store our tfstate file, which can be shared with all team members, and AWS DynamoDB to provide the state-locking mechanism.
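A typical backend configuration for this setup looks like the sketch below; the bucket name, region, and table name are placeholders you would replace with your own (the DynamoDB table needs a string hash key named LockID):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"          # placeholder bucket name
    key            = "path/to/terraform.tfstate"
    region         = "us-east-1"                  # placeholder region
    dynamodb_table = "terraform-locks"            # placeholder lock table with "LockID" hash key
    encrypt        = true
  }
}
```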
Terraform will automatically detect that you already have a state file locally and prompt you to copy it to the new S3 backend; type yes. After running terraform init, your Terraform state will be stored in the S3 bucket.
There are three possible workarounds to solve it, depending on your specific scenario:
If you have a backup of your AWS S3 terraform.tfstate file, you could restore your "s3" backend state (key = "path/to/terraform.tfstate") to an older version. Retry terraform init and validate that it works.
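If versioning is enabled on the state bucket, one way to roll back is with the AWS CLI; this is only a sketch, and the bucket name, key, and version ID below are placeholders:

```shell
# List the stored versions of the state object
aws s3api list-object-versions \
  --bucket my-tfstate-bucket \
  --prefix path/to/terraform.tfstate

# Download a known-good older version (version ID is a placeholder)
aws s3api get-object \
  --bucket my-tfstate-bucket \
  --key path/to/terraform.tfstate \
  --version-id "EXAMPLE_VERSION_ID" \
  terraform.tfstate.backup

# Re-upload it as the current version, then retry terraform init
aws s3 cp terraform.tfstate.backup s3://my-tfstate-bucket/path/to/terraform.tfstate
```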
Remove the out-of-sync entry in the AWS DynamoDB table. There will be a LockID entry in the table containing the state path and the expected checksum; delete it, and it will be regenerated after re-running terraform init.
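For context, the Digest attribute that the S3 backend stores in DynamoDB is, as far as we can tell, the hex-encoded MD5 checksum of the state file's contents, so you can recompute the expected value from a local copy of the state and compare it against (or use it to update) the table entry. A minimal sketch; the file path is a placeholder:

```python
import hashlib

def state_digest(state_bytes: bytes) -> str:
    """Return the hex MD5 digest of raw state-file bytes,
    the same form as the Digest value kept in the DynamoDB table."""
    return hashlib.md5(state_bytes).hexdigest()

# Example with a local copy of the state file (path is a placeholder):
# with open("terraform.tfstate", "rb") as f:
#     print(state_digest(f.read()))
```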
IMPORTANT CONSIDERATIONS:
The terraform refresh command (https://www.terraform.io/docs/commands/refresh.html) is used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure. It can be used to detect any drift from the last-known state and to update the state file.

If, after a terraform destroy, you have manually deleted your AWS S3 terraform.tfstate file and are now trying to spin up a new instance of all the resources declared in it (meaning you're working from scratch), you could just update your AWS S3 state backend key from "s3" {key = "path/to/terraform.tfstate"} to a new one, "s3" {key = "new-path/to/terraform.tfstate"}. Retry terraform init and validate that it works. This workaround has the limitation that it doesn't solve the root cause; you're just bypassing the problem by using a new key for the S3 tfstate.