I have some resources whose count is parameterised by a variable. The variable is used to create VM resources as well as null_resources that, for example, run deployment scripts on them. When I reduce the count from 2 to 1 and apply, I get an error.
terraform plan completes with no complaints, but when I apply, it reports a cycle:
Error: Cycle: null_resource.network_connection_configuration[7] (destroy), null_resource.network_connection_configuration[8] (destroy), null_resource.network_connection_configuration[3] (destroy), null_resource.network_connection_configuration[4] (destroy), null_resource.network_connection_configuration[0] (destroy), null_resource.network_connection_configuration[6] (destroy), null_resource.network_connection_configuration[1] (destroy), null_resource.network_connection_configuration[9] (destroy), null_resource.network_connection_configuration[2] (destroy), null_resource.network_connection_configuration[10] (destroy), hcloud_server.kafka[2] (destroy), local.all_machine_ips, null_resource.network_connection_configuration (prepare state), null_resource.network_connection_configuration[5] (destroy)
Here is the relevant part of the file:
variable "kafka_count" {
  default = 3
}

resource "hcloud_server" "kafka" {
  count       = "${var.kafka_count}"
  name        = "kafka-${count.index}"
  image       = "ubuntu-18.04"
  server_type = "cx21"
}

locals {
  all_machine_ips = "${hcloud_server.kafka.*.ipv4_address}"
}

resource "null_resource" "network_connection_configuration" {
  count = "${length(local.all_machine_ips)}"

  triggers = {
    ips = "${join(",", local.all_machine_ips)}"
  }

  depends_on = [
    "hcloud_server.kafka"
  ]

  connection {
    type = "ssh"
    user = "deploy"
    host = "${element(local.all_machine_ips, count.index)}"
    port = 22
  }

  // ... some file provisioners
}
When I try to find the cycle using the graph visualisation:
terraform graph -verbose -draw-cycles
There are no cycles visible.
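For reference, I rendered the graph by piping it into Graphviz (this assumes dot is installed; the output file name is arbitrary):
terraform graph -verbose -draw-cycles | dot -Tsvg > cycles.svg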
When I use TF_LOG=1, the debug log doesn't show any errors either.
So the issue is that I can increase the count but not decrease it. I don't want to manually hack the file, since that would mean I can't scale down cleanly in future! I'm using Terraform v0.12.1.
Are there any strategies for debugging this situation?
I had a similar issue with 0.12.x: I was calling a provisioner within an aws_instance resource and got the same error you describe when I increased the count for the resource.
I got around it by using the self object (e.g. self.private_ip) inside the connection and provisioner blocks to reference the resource's own attributes, rather than using count.index or element().
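A rough sketch of how that could look for your config (untested; it assumes your file provisioners can live directly on the hcloud_server resource and that ipv4_address is the attribute you SSH to, as in your locals block):

resource "hcloud_server" "kafka" {
  count       = var.kafka_count
  name        = "kafka-${count.index}"
  image       = "ubuntu-18.04"
  server_type = "cx21"

  # The connection only references this instance's own attribute via self,
  # so destroying one instance no longer depends on the list of all IPs.
  connection {
    type = "ssh"
    user = "deploy"
    host = self.ipv4_address
    port = 22
  }

  // ... move the file provisioners in here
}

The trade-off is that each provisioner then only sees its own instance; if a script genuinely needs the full set of machine IPs, you'd still have to pass that list in some other way.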