
Terraform does not destroy a module

I ran some experiments with Terraform, Kubernetes, Cassandra, and Elassandra, splitting everything into modules, but now I can't delete one specific module.

I'm using GitLab CI, and I store the Terraform state in an AWS S3 backend. This means that every time I change the infrastructure in the Terraform files, after a git push the infrastructure is updated by a GitLab CI pipeline that runs terraform init, terraform plan and terraform apply.
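For context, a minimal pipeline for this setup might look like the following sketch (the stage names and the TF_STATE_BUCKET / TF_KEY variables are assumptions, not taken from my actual pipeline):

```yaml
# .gitlab-ci.yml (sketch; stage names and variables are hypothetical)
stages:
  - plan
  - apply

plan:
  stage: plan
  script:
    - terraform init -backend-config="bucket=${TF_STATE_BUCKET}" -backend-config="key=${TF_KEY}"
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan

apply:
  stage: apply
  script:
    - terraform apply -auto-approve tfplan
```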

My terraform main file is this:

# main.tf
##########################################################################################################################################
# BACKEND                                                                                                                                #
##########################################################################################################################################

terraform {
  backend "s3" {}
}

data "terraform_remote_state" "state" {
  backend = "s3"
  config {
    bucket         = "${var.tf_state_bucket}"
    dynamodb_table = "${var.tf_state_table}"
    region         = "${var.aws-region}"
    key            = "${var.tf_key}"
  }
}

##########################################################################################################################################
# Modules                                                                                                                                #
##########################################################################################################################################

# Cloud Providers: -----------------------------------------------------------------------------------------------------------------------
module "gke" {
  source    = "./gke"
  project   = "${var.gcloud_project}"
  workspace = "${terraform.workspace}"
  region    = "${var.region}"
  zone      = "${var.gcloud-zone}"
  username  = "${var.username}"
  password  = "${var.password}"
}

module "aws" {
  source   = "./aws-config"
  aws-region      = "${var.aws-region}"
  aws-access_key  = "${var.aws-access_key}"
  aws-secret_key  = "${var.aws-secret_key}"
}

# Elassandra: ----------------------------------------------------------------------------------------------------------------------------
module "k8s-elassandra" {
  source   = "./k8s-elassandra"

  host     = "${module.gke.host}"
  username = "${var.username}"
  password = "${var.password}"

  client_certificate     = "${module.gke.client_certificate}"
  client_key             = "${module.gke.client_key}"
  cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
}

# Cassandra: ----------------------------------------------------------------------------------------------------------------------------
module "k8s-cassandra" {
  source   = "./k8s-cassandra"

  host     = "${module.gke.host}"
  username = "${var.username}"
  password = "${var.password}"

  client_certificate     = "${module.gke.client_certificate}"
  client_key             = "${module.gke.client_key}"
  cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
}

This is a tree of my directory:

.
├── aws-config
│   ├── terraform_s3.tf
│   └── variables.tf
├── gke
│   ├── cluster.tf
│   ├── gcloud_access_key.json
│   ├── gcp.tf
│   └── variables.tf
├── k8s-cassandra
│   ├── k8s.tf
│   ├── limit_ranges.tf
│   ├── quotas.tf
│   ├── services.tf
│   ├── stateful_set.tf
│   └── variables.tf
├── k8s-elassandra
│   ├── k8s.tf
│   ├── limit_ranges.tf
│   ├── quotas.tf
│   ├── services.tf
│   ├── stateful_set.tf
│   └── variables.tf
├── main.tf
└── variables.tf

I'm blocked here:

-> I want to remove the module k8s-cassandra

  • If I comment out or delete the module in main.tf (module "k8s-cassandra" {...), I receive this error:

TERRAFORM PLAN... Acquiring state lock. This may take a few moments... Releasing state lock. This may take a few moments...

Error: module.k8s-cassandra.kubernetes_stateful_set.cassandra: configuration for module.k8s-cassandra.provider.kubernetes is not present; a provider configuration block is required for all operations

  • If I insert terraform destroy -target=module.k8s-cassandra -auto-approve between terraform init and terraform plan, it still does not work.

Can anyone help me, please? Thanks :)

Rui Martins asked Feb 04 '19


1 Answer

The meaning of this error message is that Terraform was relying on a provider "kubernetes" block inside the k8s-cassandra module in order to configure the Kubernetes provider. By removing the module from the source code, you implicitly removed that configuration too, so the existing objects already recorded in the state cannot be deleted: the provider configuration needed to do that is no longer present.
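Concretely, the k8s-cassandra module most likely contains something along these lines (a hypothetical reconstruction; the exact attributes in your k8s.tf may differ). This is the module-local provider configuration that disappears when the module block is removed:

```hcl
# k8s-cassandra/k8s.tf (hypothetical reconstruction of the problem pattern)
provider "kubernetes" {
  host     = "${var.host}"
  username = "${var.username}"
  password = "${var.password}"

  client_certificate     = "${var.client_certificate}"
  client_key             = "${var.client_key}"
  cluster_ca_certificate = "${var.cluster_ca_certificate}"
}
```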

Although Terraform allows provider blocks inside child modules for flexibility, the documentation recommends keeping all of them in the root module and passing the provider configurations by name into the child modules using a providers map, or by automatic inheritance by name.

provider "kubernetes" {
  # global kubernetes provider config
}

module "k8s-cassandra" {
  # ...module arguments...

  # provider "kubernetes" is automatically inherited by default, but you
  # can also set it explicitly:
  providers = {
    "kubernetes" = "kubernetes"
  }
}
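Note that the quoted map keys above are Terraform 0.11 syntax. In Terraform 0.12 and later, the same providers map is written with bare references instead of strings:

```hcl
module "k8s-cassandra" {
  # ...module arguments...

  providers = {
    kubernetes = kubernetes
  }
}
```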

To get out of the conflict situation you are already in, though, the answer is to temporarily restore the module "k8s-cassandra" block and then destroy the objects it is managing before removing it, using the -target option:

terraform destroy -target module.k8s-cassandra

Once all of the objects managed by that module have been destroyed and removed from the state, you can then safely remove the module "k8s-cassandra" block from configuration.
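Put together, the recovery sequence looks roughly like this (run locally against the same S3 backend, or as a one-off CI job; the -target value matches the module address above):

```
# 1. Restore the module "k8s-cassandra" block in main.tf, then:
terraform init

# 2. Destroy only the objects managed by that module:
terraform destroy -target=module.k8s-cassandra

# 3. Remove the module block from main.tf, then verify nothing else changes:
terraform plan
```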

To prevent this from happening again, you should rework the root and child modules here so that the provider configurations are all in the root module, and child modules only inherit provider configurations passed in from the root. For more information, see Providers Within Modules in the documentation.

Martin Atkins answered Oct 14 '22