Scenario: I am running an AWS Auto Scaling group (ASG) and have changed the associated launch configuration via terraform apply. The ASG itself stays unaffected.
How do I now recreate the instances in that ASG (i.e., replace them one by one in a rolling fashion) so that they are based on the changed/new launch configuration?
What I've tried: With terraform taint one can mark resources to be destroyed and recreated during the next apply. However, I don't want to taint the Auto Scaling group (which is the Terraform resource here; the individual instances in it are not), but rather the single instances inside it. Is there a way to taint single instances, or am I thinking in the wrong direction?
You can trigger an Instance Refresh from the EC2 Auto Scaling console, or use the StartInstanceRefresh API via the AWS CLI or any AWS SDK. All you need to do is specify the percentage of healthy instances to keep in the group while the ASG terminates and launches instances.
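If you manage the group with Terraform, the same behaviour can also be expressed declaratively with the instance_refresh block of aws_autoscaling_group. A minimal sketch, assuming AWS provider 3.22+ (and hence Terraform 0.12+ syntax) and hypothetical names and sizes:

resource "aws_autoscaling_group" "refresh_example" {
  name                 = "instance-refresh-example" # hypothetical
  availability_zones   = ["eu-west-1a"]             # hypothetical
  min_size             = 2
  max_size             = 4
  launch_configuration = aws_launch_configuration.as_conf.name

  # By default a refresh is triggered whenever the group's launch configuration
  # (or launch template) changes; instances are then replaced in rolling batches
  # while at least 90% of the group is kept healthy.
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
}

With this in place, applying a change that produces a new launch configuration starts a rolling refresh instead of leaving the existing instances untouched.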
Terraform will not replace the running instances for you by default; this is how Auto Scaling itself behaves. Please see this AWS document (http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html):
When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.
The normal thing to do here is to use Terraform's lifecycle management to force it to create new resources before destroying the old ones.
In this case you might set your launch configuration and autoscaling group up something like this:
resource "aws_launch_configuration" "as_conf" {
name_prefix = "terraform-lc-example-"
image_id = "${var.ami_id}"
instance_type = "t1.micro"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "bar" {
name = "terraform-asg-example-${aws_launch_configuration.as_conf.name}"
launch_configuration = "${aws_launch_configuration.as_conf.name}"
lifecycle {
create_before_destroy = true
}
}
Then if you change the ami_id variable to use another AMI, Terraform will realise it has to change the launch configuration and so create a new one before destroying the old one. The new name generated by the new launch configuration is then interpolated into the ASG name, forcing the ASG to be rebuilt as well.
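For completeness, the ami_id variable referenced above would be declared something like this (the default value is a placeholder, not part of the original answer):

variable "ami_id" {
  description = "AMI used by the launch configuration; changing it rolls the ASG"
  default     = "ami-12345678" # placeholder, supply your own AMI
}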
As you are using create_before_destroy, Terraform will create the new launch configuration and ASG and wait for the new ASG to reach its desired capacity (which can be configured with health checks) before destroying the old ASG and then the old launch configuration.
This will flip all the instances in the ASG at once. So if the ASG had a minimum capacity of 2, it will create 2 new instances, and as soon as both of those pass health checks the 2 older instances will be destroyed. If you are using an ELB with the ASG, the 2 new instances are joined to the ELB, so you will temporarily have all 4 instances in service before the older 2 are destroyed.
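If you want terraform apply to block until the replacement instances are actually in service behind the ELB, the relevant arguments on the ASG look roughly like this (the ELB reference, availability zone, and sizes are illustrative, not from the original answer):

resource "aws_autoscaling_group" "bar" {
  name                 = "terraform-asg-example-${aws_launch_configuration.as_conf.name}"
  launch_configuration = "${aws_launch_configuration.as_conf.name}"
  availability_zones   = ["eu-west-1a"] # illustrative
  min_size             = 2
  max_size             = 4
  load_balancers       = ["${aws_elb.example.name}"] # hypothetical classic ELB

  # Treat an instance as healthy only once the ELB health check passes, and
  # make Terraform wait until at least 2 instances are in service behind the
  # ELB before the old ASG (and then the old launch configuration) is destroyed.
  health_check_type         = "ELB"
  health_check_grace_period = 300
  min_elb_capacity          = 2

  lifecycle {
    create_before_destroy = true
  }
}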