
How to manage Terraform for multiple repos

Tags:

terraform

I have 2 repos for my project: a static website and a server. I want the website to be hosted by CloudFront and S3, and the server on Elastic Beanstalk. These resources will need to know about a Route 53 resource, at least, so they can live under the same domain name for CORS to work, among other things such as VPCs.

So my question is: how do I manage Terraform with multiple repos? I'm thinking I could have a separate infrastructure repo that builds for all repos. I could also keep them separate and pass in the ARNs/names/IDs as variables (annoying).

Caleb Macdonald Black, asked Jan 09 '18


2 Answers

You can use the terraform_remote_state data source for this. It lets you read the output variables from another Terraform state file.

Let's assume you save your state files remotely on S3 and you have a website.tfstate and a server.tfstate file. You could output the hosted zone ID of your Route 53 zone as hosted_zone_id in your website.tfstate and then reference that output variable directly in your server state's Terraform code.

# Terraform 0.12+ syntax: config is assigned with "=" and
# remote outputs are read through the .outputs attribute.
data "terraform_remote_state" "website" {
  backend = "s3"

  config = {
    bucket = "<website_state_bucket>"
    region = "<website_bucket_region>"
    key    = "website.tfstate"
  }
}

resource "aws_route53_record" "www" {
  zone_id = data.terraform_remote_state.website.outputs.hosted_zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = [aws_eip.lb.public_ip]
}

Note that you can only read output variables from remote states. You cannot access resources directly, as Terraform treats other states/modules as black boxes.
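For completeness, the producer side would declare the output in the website configuration, something like this (the resource name "main" is illustrative; the answer only specifies the output name hosted_zone_id):

# In the website configuration: publish the zone ID so other
# states can read it via terraform_remote_state.
output "hosted_zone_id" {
  value = aws_route53_zone.main.zone_id
}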

Update

As mentioned in the comments, terraform_remote_state is a simple way to share explicitly published variables across multiple states. However, it comes with two issues:

  1. Tight coupling between code components, i.e., the producer of the variable cannot change easily.
  2. It can only be used by Terraform, i.e., you cannot easily share those variables across different layers. Configuration tools such as Ansible cannot use .tfstate natively without some additional custom plugin/wrapper.

The recommended HashiCorp way is to use a central config store such as Consul. It comes with more benefits:

  1. Consumer is decoupled from the variable producer.
  2. Explicit publishing of variables (like in terraform_remote_state).
  3. Can be used by other tools.

A more detailed explanation can be found here.
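As a sketch of the Consul approach (the key path and names here are made up for illustration), the producing configuration can write a key with the consul_keys resource, and any consumer can read it back with the consul_keys data source:

# Producer: publish the zone ID to Consul's KV store.
resource "consul_keys" "website" {
  key {
    path  = "infra/website/hosted_zone_id"
    value = aws_route53_zone.main.zone_id
  }
}

# Consumer: read the key back, with no dependency on the
# producer's state file location or backend.
data "consul_keys" "shared" {
  key {
    name = "hosted_zone_id"
    path = "infra/website/hosted_zone_id"
  }
}

The consumer then references the value as data.consul_keys.shared.var.hosted_zone_id, and non-Terraform tools can read the same key straight from Consul.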

fishi0x01, answered Sep 17 '22


An approach I've used in the past is to have a single repo for all of the infrastructure.

An alternative is to have two separate Terraform configurations, each using remote state. Config 1 can use output variables to store any ARNs/IDs as necessary.

Config 2 can then have a remote_state data source to query for the relevant ARNs/IDs.

E.g.

# Declare remote state
# Declare remote state (Terraform 0.12+ syntax: config = { ... })
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

You can then reference output values through the outputs attribute, e.g. data.terraform_remote_state.network.outputs.some_id
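The corresponding declaration in config 1 might look like this (the VPC resource name is hypothetical; only the output name matters to the consumer):

# In config 1, whose state is stored at network/terraform.tfstate:
output "some_id" {
  value = aws_vpc.main.id
}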

Fermin, answered Sep 20 '22