Terraform template re-creating all resources after manually changing EBS volume

I have a Terraform template that creates an EC2 instance for our 6 backend applications and adds the security group rules it needs to connect to required resources. It also creates 6 load balancers (ALBs) that we use to expose our backends to the outside.

Last week our production instance failed its status check because disk usage reached 100%, caused by a continuously growing error log. During this incident we had to recover the production instance using a recovery EC2 instance, and we manually increased the capacity of the production instance's EBS volume.

We then tried updating our Terraform template to match the new EBS volume size, but the plan shows it is going to destroy all of our production resources and recreate them in the process.

I'm trying to figure out how to avoid recreating all of the resources while bringing the template up to date with the new EBS volume capacity.

Below is the code that creates the EC2 instance.

resource "aws_instance" "ec2" {
  ami                  = "${var.ami_id}"
  instance_type        = "${var.instance_type}"
  key_name             = "${var.key_pair_name}"
  subnet_id            = "${var.private_subnet_id}"
  iam_instance_profile = "${aws_iam_instance_profile.iam_instance_profile.name}"

  /*
   * CAUTION: changing the value of the fields below will cause the EC2 instance to be
   * terminated and re-created. Think before running the "apply" command.
   */
  associate_public_ip_address = false

  tags = {
    Environment = "${var.env}"
    Project     = "${var.project}"
    Provisioner = "different-box"
    Name        = "${local.name}"
  }

  root_block_device {
    volume_type = "standard"
    volume_size = 50
  }
}

Even if I update volume_size to match the new size of 100, it is still going to re-create all of the resources.

Plan output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_instance.ec2 must be replaced
-/+ resource "aws_instance" "ec2" {
        ami                          = "ami-09d1383e2a5ae8a93"
      ~ arn                          = "arn:aws:ec2:us-west-2:289914521333:instance/i-0ffa0d29b8fc91930" -> (known after apply)
        associate_public_ip_address  = false
      ~ availability_zone            = "us-west-2a" -> (known after apply)
      ~ cpu_core_count               = 1 -> (known after apply)
      ~ cpu_threads_per_core         = 2 -> (known after apply)
      - disable_api_termination      = false -> null
      - ebs_optimized                = false -> null
        get_password_data            = false
      - hibernation                  = false -> null
      + host_id                      = (known after apply)
        iam_instance_profile         = "iam_instance_profile_prod"
      ~ id                           = "i-0ffa0d29b8fc91930" -> (known after apply)
      ~ instance_state               = "running" -> (known after apply)
        instance_type                = "t3.large"
      ~ ipv6_address_count           = 0 -> (known after apply)
      ~ ipv6_addresses               = [] -> (known after apply)
        key_name                     = "dev_different"
      - monitoring                   = false -> null
      + network_interface_id         = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      ~ primary_network_interface_id = "eni-061cb6a5ca9240438" -> (known after apply)
      ~ private_dns                  = "ip-172-31-72-30.us-west-2.compute.internal" -> (known after apply)
      ~ private_ip                   = "172.31.72.30" -> (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      ~ security_groups              = [
          - "default",
          - "different-box.prod-sg",
        ] -> (known after apply)
        source_dest_check            = true
        subnet_id                    = "subnet-00beb1529c4ff05af"
        tags                         = {
            "Environment" = "prod"
            "Name"        = "different-box.prod"
            "Project"     = "different-box"
            "Provisioner" = "different-box"
        }
      ~ tenancy                      = "default" -> (known after apply)
      ~ volume_tags                  = {} -> (known after apply)
      ~ vpc_security_group_ids       = [
          - "sg-0844f9cd4fb14d5d9",
          - "sg-97ef74ef",
        ] -> (known after apply)

      - credit_specification {
          - cpu_credits = "unlimited" -> null
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      ~ root_block_device {
          ~ delete_on_termination = false -> true # forces replacement
          ~ encrypted             = false -> (known after apply)
          ~ iops                  = 0 -> (known after apply)
          + kms_key_id            = (known after apply)
          ~ volume_id             = "vol-01d0d03d564cf44d6" -> (known after apply)
            volume_size           = 100
            volume_type           = "standard"
        }
    }

  # aws_network_interface_sg_attachment.sg_attachment must be replaced
-/+ resource "aws_network_interface_sg_attachment" "sg_attachment" {
      ~ id                   = "sg-0844f9cd4fb14d5d9_eni-061cb6a5ca9240438" -> (known after apply)
      ~ network_interface_id = "eni-061cb6a5ca9240438" -> (known after apply) # forces replacement
        security_group_id    = "sg-0844f9cd4fb14d5d9"
    }

  # module.alb_admin-mobile-api.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/admin-mobile-api-prod-alb-tg/b6940620ef9217f6-20190610084318298800000003" -> (known after apply)
        port             = 1982
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/admin-mobile-api-prod-alb-tg/b6940620ef9217f6"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

  # module.alb_admin-portal-backend.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/admin-portal-backend-prod-alb-tg/09e967d1703d0c93-20190610084319310500000004" -> (known after apply)
        port             = 3001
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/admin-portal-backend-prod-alb-tg/09e967d1703d0c93"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

  # module.alb_api.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/api-prod-alb-tg/4cb4a656a520c34d-20190610084318237800000001" -> (known after apply)
        port             = 1984
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/api-prod-alb-tg/4cb4a656a520c34d"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

  # module.alb_digitalreign.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/digitalreign-prod-alb-tg/c8f0a479686bcaf0-20190610084318291300000002" -> (known after apply)
        port             = 2040
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/digitalreign-prod-alb-tg/c8f0a479686bcaf0"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

  # module.alb_engine-ui.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/engine-ui-prod-alb-tg/a2aedefc0c88b5e4-20190701134129654000000001" -> (known after apply)
        port             = 2016
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/engine-ui-prod-alb-tg/a2aedefc0c88b5e4"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

  # module.alb_example-backend.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/example-backend-prod-alb-tg/fa7eb3eb4ac1aa95-20190610084319317500000005" -> (known after apply)
        port             = 2010
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/example-backend-prod-alb-tg/fa7eb3eb4ac1aa95"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

  # module.alb_tenant-mobile-api.aws_alb_target_group_attachment.alb_target_group_attachment must be replaced
-/+ resource "aws_alb_target_group_attachment" "alb_target_group_attachment" {
      ~ id               = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/tenant-mobile-api-prod-alb-tg/76edfa9edba45f58-20190610084319318900000006" -> (known after apply)
        port             = 1983
        target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:289914521333:targetgroup/tenant-mobile-api-prod-alb-tg/76edfa9edba45f58"
      ~ target_id        = "i-0ffa0d29b8fc91930" -> (known after apply) # forces replacement
    }

Plan: 9 to add, 0 to change, 9 to destroy.

------------------------------------------------------------------------
1 Answer

According to the plan output, the instance is being replaced because delete_on_termination has changed on the root volume. This attribute likely changed when the volume was attached to the recovery instance.
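
One way to confirm the volume's current setting outside of Terraform is the AWS CLI (the instance ID below is taken from the plan output):

aws ec2 describe-instances \
  --instance-ids i-0ffa0d29b8fc91930 \
  --query 'Reservations[].Instances[].BlockDeviceMappings[].Ebs.DeleteOnTermination'

If this returns false while the configuration expects the provider default of true, that mismatch is what forces the replacement.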

While the aws_instance resource may not support updating this option in place, according to the AWS documentation it should be possible to change it on a running instance.

There are two possible solutions:

  1. If you don't need the volume to be deleted on instance termination, you can simply add delete_on_termination = false to your root_block_device so the configuration matches the volume's actual state. Since you have not set it, the default is being used (which is true according to the documentation). See the first sketch after this list.
  2. Change the DeleteOnTermination attribute back to true outside of Terraform using the AWS CLI (see the second sketch after this list).
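
For option 1, a minimal sketch of the updated block (relative to the code in the question, only delete_on_termination is new; volume_size is bumped to the new capacity):

root_block_device {
  volume_type           = "standard"
  volume_size           = 100
  delete_on_termination = false
}

For option 2, the attribute can be changed on the running instance with aws ec2 modify-instance-attribute. The instance ID is again taken from the plan output, but the root device name /dev/sda1 is an assumption here, so check your instance's actual root device name first:

aws ec2 modify-instance-attribute \
  --instance-id i-0ffa0d29b8fc91930 \
  --block-device-mappings '[{"DeviceName": "/dev/sda1", "Ebs": {"DeleteOnTermination": true}}]'

Either way, run terraform plan again afterwards: the root_block_device change should now show as an in-place update (~) rather than forcing replacement.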