I'm writing a Terraform module that should be reused across different environments.
To keep things simple, here's a basic example of calling the module from one of the environments' root modules:
## QA-resources.tf
module "some_module" {
  source = "./path/to/module"
}

some_variable = "${module.some_module.some_output}"
The problem is that when the module's resources were already created, Terraform throws an error:
Error creating [resource-type] [resource-name]: EntityAlreadyExists: [resource-type] with [resource-name] already exists. status code: 409, request id: ...
This happens when the module was already applied under the scope of an external terraform.tfstate and one of its resources has a unique field such as 'Name'.
In my case it happened while trying to use an IAM module that had already created a role with that specific name, but it can happen in many other cases (I don't want the discussion to be specific to my use case).
I would expect that if one of the module's resources already exists, no failure would occur and the module's outputs would still be available to the root module.
Any suggestions how to manage this (maybe using a specific command or flag)?
A few related threads I found:
Terraform doesn't reuse an AWS Role it just created and fails?
what is the best way to solve EntityAlreadyExists error in terraform?
Terraform error EntityAlreadyExists: Role with name iam_for_lambda already exists
Per @Martin Atkins' request, here's the resource that caused the error. It is a basic role for an AWS EKS cluster with two policies attached (passed via var.policies):
resource "aws_iam_role" "k8s_role" {
name = "k8s-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "role-policy-attach" {
role = "${aws_iam_role.k8s_role.name}"
count = "${length(var.policies)}"
policy_arn = "${element(var.policies, count.index)}"
}
This role was wrapped as a module and called from the root module. The error quoted above occurred because the role already existed when the root module tried to create it.
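For context, the module call looks roughly like this; the module path and variable wiring are placeholders for the sketch, not the exact code:

module "k8s_role" {
  source   = "./modules/iam-role"
  policies = "${var.policies}"
}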
In Terraform's view, every object is either managed by Terraform or it isn't. Terraform avoids implicitly taking ownership of existing objects because, if it did, a subsequent terraform destroy could inadvertently destroy something you never intended Terraform to be managing.
In your case, that means you need to decide whether the role named k8s-role is managed by Terraform or not, and if you have more than one Terraform configuration you will need to choose exactly one configuration to manage that object.
In the one Terraform configuration that manages the object, you can use a resource "aws_iam_role" block to specify that. If any other configurations need to access it, or if it will not be managed with Terraform at all, you can refer to the role name k8s-role directly wherever it is needed. If you need more information about that role than just its name, you can use the aws_iam_role data source to fetch that information without declaring that you want to manage the object:
data "aws_iam_role" "k8s" {
name = "k8s-role"
}
For example, if you need the ARN of this role you can access the arn attribute of this data resource as data.aws_iam_role.k8s.arn.
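As an illustration, a consuming configuration could pass that ARN straight into an EKS cluster; the cluster name and var.subnet_ids below are assumptions for the sketch, not part of the original setup:

resource "aws_eks_cluster" "example" {
  # Reference the role without managing it, via the data source above.
  name     = "example-cluster"
  role_arn = "${data.aws_iam_role.k8s.arn}"

  vpc_config {
    subnet_ids = ["${var.subnet_ids}"]
  }
}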
Finally, if your role is not currently managed by Terraform but you would like to put it under Terraform's ownership, you can explicitly tell Terraform to start managing the existing object by importing it, which creates the association between the existing object and your resource block:
terraform import aws_iam_role.k8s_role k8s-role
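Note that import only creates the state association; it does not generate any configuration, so the aws_iam_role.k8s_role block should already be present in your configuration. After importing, it's worth running a plan to confirm that the configuration matches the real object; any remaining diff is what the next apply would change:

terraform plan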