
Capture Terraform provisioner output?

Tags:

terraform

Use Case

I'm trying to provision a (Docker Swarm or Consul) cluster where the cluster is first initialized on one node, which generates a token that the other nodes then need in order to join. The key constraint is that nodes 1 and 2 shouldn't attempt to join the cluster until node 0 has generated the join key.

E.g. on node 0, running docker swarm init ... returns a join token. Then on nodes 1 and 2, you'd pass that token to the join command, like docker swarm join --token ${JOIN_TOKEN} ${NODE_0_IP_ADDRESS}:${SOME_PORT}. And magic, you've got a neat little cluster...
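For context, the bootstrap sequence looks roughly like this (a sketch; the token and address values are placeholders, and 2377 is Docker's default swarm port):

# On node 0: initialize the swarm, which prints a join token
docker swarm init --advertise-addr ${NODE_0_IP_ADDRESS}

# On nodes 1 and 2: join using the token node 0 produced
docker swarm join --token ${JOIN_TOKEN} ${NODE_0_IP_ADDRESS}:2377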

Attempts So Far

  • Tried initializing all nodes with the AWS SDK installed, storing the join key from node 0 in S3, then fetching that key on the other nodes. This is done via a null_resource with 'remote-exec' provisioners (see the sketch after this list). Because Terraform executes resources in parallel, there are race conditions, and predictably nodes 1 and 2 frequently attempt to fetch a key from S3 that isn't there yet (i.e. node 0 hasn't finished its work).

  • Tried using the 'local-exec' provisioner to SSH into node 0 and capture its join-key output. This hasn't worked well, or I just wasn't doing it right.
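For reference, a minimal sketch of the shape of that first attempt (the instance references, bucket name, and commands are hypothetical, not from the question; connection blocks are omitted for brevity):

# Node 0 initializes the swarm and uploads the join token to S3
resource "null_resource" "init_manager" {
  provisioner "remote-exec" {
    inline = [
      "docker swarm init --advertise-addr ${aws_instance.node0.private_ip}",
      "docker swarm join-token -q worker | aws s3 cp - s3://my-bucket/join-token",
    ]
  }
}

# Nodes 1 and 2 fetch the token and join. Racy: Terraform may run
# this resource before node 0 has uploaded anything.
resource "null_resource" "join_workers" {
  provisioner "remote-exec" {
    inline = [
      "aws s3 cp s3://my-bucket/join-token /tmp/join-token",
      "docker swarm join --token $(cat /tmp/join-token) ${aws_instance.node0.private_ip}:2377",
    ]
  }
}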


I've read the docs. And Stack Overflow. And GitHub issues, like this really long outstanding one. Thoroughly. If this has been solved elsewhere, though, links are appreciated!


PS - this is directly related to, and is a smaller subset of, this question, but I wanted to re-ask it to narrow the scope of the problem.

asked Jun 12 '17 by jiveTurkey

2 Answers

You can redirect the command's output and exit status to files:

resource "null_resource" "shell" {

  provisioner "local-exec" {
    command = "uptime 2>stderr >stdout; echo $? >exitstatus"
  }
}

and then read the stdout, stderr, and exitstatus files with the local_file data source.
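For example (a sketch; the depends_on ensures the file is read only after the command has run, and the path assumes the files land in the module directory):

data "local_file" "stdout" {
  filename   = "${path.module}/stdout"
  depends_on = ["null_resource.shell"]
}

The contents are then available as data.local_file.stdout.content.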

The problem is that if the files disappear, then terraform apply will fail.

In Terraform 0.11 I made a workaround by reading the files with an external data source and storing the results in null_resource triggers (!):

resource "null_resource" "contents" {
  triggers = {
    stdout     = "${data.external.read.result["stdout"]}"
    stderr     = "${data.external.read.result["stderr"]}"
    exitstatus = "${data.external.read.result["exitstatus"]}"
  }

  lifecycle {
    ignore_changes = [
      "triggers",
    ]
  }
}
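The external data source referenced above isn't shown in the answer; a minimal sketch of what it could look like (an assumption on my part: it just has to emit the three files as a single JSON object, and this version requires jq to be installed):

# Read the three files and emit them as one JSON object,
# as the external data source protocol requires
data "external" "read" {
  program = ["/bin/bash", "-c", "jq -n --arg out \"$(cat stdout)\" --arg err \"$(cat stderr)\" --arg code \"$(cat exitstatus)\" '{stdout: $out, stderr: $err, exitstatus: $code}'"]
}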

But in Terraform 0.12 this can be replaced with the file() function:
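A sketch of the 0.12 form (keeping in mind that file() is evaluated when the configuration is processed, so the files must already exist; the path is an assumption):

resource "null_resource" "contents" {
  triggers = {
    stdout     = file("${path.module}/stdout")
    stderr     = file("${path.module}/stderr")
    exitstatus = file("${path.module}/exitstatus")
  }

  lifecycle {
    ignore_changes = [triggers]
  }
}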

and then finally I can use and output those with:

output "stdout" {
  value = "${chomp(null_resource.contents.triggers["stdout"])}"
}

See the module at https://github.com/matti/terraform-shell-resource for the full implementation.

answered Oct 21 '22 by matti


You can use the external data source:

data "external" "docker_token" {
  program = ["/bin/bash", "-c" "echo \"{\\\"token\\\":\\\"$(docker swarm init...)\\\"}\""]
}

Then the token will be available as data.external.docker_token.result.token. If you need to pass arguments in, you can use a script (e.g. relative to path.module). See https://www.terraform.io/docs/providers/external/data_source.html for details.
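For instance, the captured token could then feed the join command on the other nodes (a hypothetical sketch; the variables, SSH settings, and port are assumptions, not part of the answer):

resource "null_resource" "join_worker" {
  connection {
    type = "ssh"
    host = var.worker_ip   # assumed variable
    user = "ubuntu"        # assumed user
  }

  provisioner "remote-exec" {
    inline = [
      "docker swarm join --token ${data.external.docker_token.result.token} ${var.manager_ip}:2377",
    ]
  }
}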

answered Oct 21 '22 by shaunc