I'm using Terraform to launch my cloud environments.
It seems that even a minor configuration change affects many of the resources behind the scenes.
For example, when I create AWS instances, a small change leads to the re-creation of all the instances:
-/+ aws_instance.DC (new resource required)
id: "i-075deb0aaa57c2d" => <computed> (forces new resource) <----- How can we avoid that?
ami: "ami-01e306baaaa0a6f65" => "ami-01e306baaaa0a6f65"
arn: "arn:aws:ec2:ap-southeast-2:857671114786:instance/i-075deb0aaa57c2d" => <computed>
associate_public_ip_address: "false" => <computed>
availability_zone: "ap-southeast-2a" => <computed>
.
.
My question is related specifically to AWS as the provider:
How can we avoid the destruction/creation of resources each time?
Maybe a relevant flag in Terraform?
Related threads:
Terraform > ipv6_address_count: "" => "0" (forces new resource)
terraform > forces new resource on security group
Edit:
Digging into the plan output, it seems that one of the resources changed:
security_groups.#: "0" => "1" (forces new resource)
security_groups.837544107: "" => "sg-0892062659392afa9" (forces new resource)
The question is still relevant from the perspective of how to avoid the re-creation.
When you want Terraform to ignore changes between subsequent apply runs, you can use the lifecycle ignore_changes meta-argument. With ignore_changes, Terraform sets the value when the resource is first created and then ignores any subsequent changes to that argument.
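A minimal sketch of that meta-argument (the resource name example and the choice of ignored arguments are illustrative):

```hcl
resource "aws_instance" "example" {
  # ... other arguments ...

  lifecycle {
    # After the first apply, drift in these arguments no longer
    # produces a diff, so it can't force replacement either.
    ignore_changes = [ami, security_groups]
  }
}
```

Note that ignore_changes hides real drift from you, so it's best reserved for arguments that are deliberately managed outside Terraform.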
To prevent destroy operations for specific resources, you can add the prevent_destroy attribute to the resource's lifecycle block. This option makes Terraform error out on any plan that would destroy the resource, protecting critical resources from accidental removal. Add prevent_destroy to your EC2 instance and run terraform destroy to observe the behavior.
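A minimal sketch (the resource name example is illustrative):

```hcl
resource "aws_instance" "example" {
  # ... other arguments ...

  lifecycle {
    # Any plan that would destroy this resource — including a
    # "forces new resource" replacement — now fails with an error.
    prevent_destroy = true
  }
}
```

This doesn't stop the replacement from being *needed*; it only turns the destroy step into a hard error so you can fix the configuration first.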
If you know that an object is damaged, or if you want to force Terraform to replace it for any other reason, you can override Terraform's default behavior using the -replace=... planning option when you run either terraform plan or terraform apply: $ terraform apply -replace=aws_instance.
Terraform resources only force a new resource if there's no clear upgrade path when modifying a resource to match the new configuration. This is done at the provider level by setting the ForceNew: true flag on the parameter. An example is shown with the ami parameter on the aws_instance resource:
Schema: map[string]*schema.Schema{
    "ami": {
        Type:     schema.TypeString,
        Required: true,
        ForceNew: true,
    },
},
This tells Terraform that if the ami parameter is changed, it shouldn't attempt an in-place update but should instead destroy the resource and create a new one.
You can override the destroy-then-create behaviour with the create_before_destroy lifecycle configuration block:
resource "aws_instance" "example" {
  # ...

  lifecycle {
    create_before_destroy = true
  }
}
In the event you changed the ami or some other parameter that can't be updated in place, Terraform would then create the new instance first and destroy the old one afterwards.
How you handle zero-downtime upgrades of resources can be tricky and largely depends on what the resource is and how you manage it. There's more information about that in the official blog.
In your very specific use case, with it being security_groups that changed, this is mentioned in the aws_instance resource docs:
NOTE: If you are creating Instances in a VPC, use vpc_security_group_ids instead.
This is because Terraform's AWS provider and the EC2 API that Terraform uses are backwards compatible with old EC2 Classic AWS accounts that predate VPCs. With those accounts you could create instances outside of VPCs, but you couldn't change the security groups of an instance after it was created; if you wanted to change ingress/egress for the instance, you needed to work within the group(s) already attached to it. With VPC-based instances, AWS allowed users to modify instance security groups without replacing the instance, and so exposed a different way of specifying this in the API.
If you move to using vpc_security_group_ids instead of security_groups, you will be able to modify the attached security groups without replacing your instances.
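A minimal sketch of the fix, reusing the AMI and security group ID from the plan output above (instance_type here is an assumed value):

```hcl
resource "aws_instance" "DC" {
  ami           = "ami-01e306baaaa0a6f65"
  instance_type = "t3.micro" # assumed; use your actual instance type

  # Updatable in place for VPC instances, unlike security_groups,
  # which forces a destroy/create cycle when it changes.
  vpc_security_group_ids = ["sg-0892062659392afa9"]
}
```

After switching, subsequent changes to the security group list should show as an in-place update (~) in the plan rather than -/+ replacement.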
I had the same issue. I replaced security_groups with vpc_security_group_ids and the issue was resolved.