We want each of our Terraform environments in a separate AWS account, set up in a way that makes accidental deployments to Production hard to do. How is this best accomplished?
We assume that one account is dedicated to Production, another to Pre-Production, and that any sandbox environments also have their own accounts, perhaps one per admin. We also assume that each AWS account has an S3 bucket specific to its environment, and that your AWS credentials are managed in ~/.aws/credentials (or via an IAM role).
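For reference, a minimal ~/.aws/credentials with one named profile per account might look like the sketch below; the profile names and key values are placeholders, not values from this setup. Selecting a profile with AWS_PROFILE (or assuming an IAM role) then determines which account Terraform talks to.
[production]
aws_access_key_id     = AKIAPRODEXAMPLE
aws_secret_access_key = <production secret key>

[preproduction]
aws_access_key_id     = AKIAPPEXAMPLE
aws_secret_access_key = <preproduction secret key>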
Terraform Backend Configuration
There are two kinds of state to configure: the backend for this configuration's own state, and the remote states it reads from. For the primary state we use Partial Configuration, because variables can't be passed into the backend block through modules or any other means; the backend is read before they are evaluated.
Terraform Config Setup
This means that we declare the backend with some details missing and then provide them as arguments to terraform init. Once initialized, that configuration stays in place until the .terraform directory is removed.
terraform {
  backend "s3" {
    encrypt = true
    key     = "name/function/terraform.tfstate"
  }
}
Workflow Considerations
We only need to change how we initialize: the -backend-config arguments to terraform init supply the missing parts of the configuration. I provide all of the missing parts through bash aliases in my ~/.bash_profile, like this.
alias terrainit='terraform init \
  -backend-config "bucket=s3-state-bucket-name" \
  -backend-config "dynamodb_table=table-name" \
  -backend-config "region=region-name"'
Accidental Misconfiguration Results
If any of the required -backend-config arguments are left off, initialization will prompt you for them. If one is provided incorrectly, initialization will most likely fail for permissions reasons, and the remote state configuration must also match or it will fail as well. Multiple mistakes in identifying the target account environment would have to line up before a deployment could reach Production.
Terraform Remote State
The next problem is that the remote states also need to change per account, and they can't pull their configuration from the backend config; they can, however, be set through variables.
Module Setup
To ease switching accounts, we set up a really simple module that takes a single variable, aws-account, and returns a set of outputs with the appropriate values for the remote states to use. It can also carry anything else that is environment- or account-specific. The module is a simple main.tf with map variables keyed by aws-account, whose values are specific to that account, plus a set of outputs that do a simple lookup on each map, like this.
variable "aws-region" {
description = "aws region for the environment"
type = "map"
default = {
Production = "us-west-2"
PP = "us-east-2"
}
}
output "aws-region" {
description = “The aws region for the account
value = "${lookup(var.aws-region, var.aws-account, "invalid AWS account specified")}"
}
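The remote state example further down also references an s3-state-bucket-name output from this module. Following the same pattern, a sketch of that map and its lookup might look like this; the bucket names are placeholders, not the original values.
variable "s3-state-bucket-name" {
  description = "S3 state bucket for the environment"
  type        = "map"
  default = {
    Production = "prod-terraform-state"
    PP         = "pp-terraform-state"
  }
}

output "s3-state-bucket-name" {
  description = "The S3 state bucket for the account"
  value       = "${lookup(var.s3-state-bucket-name, var.aws-account, "invalid AWS account specified")}"
}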
Terraform Config Setup
First, we must pass the aws-account to the module. This will probably be near the top of main.tf.
module "environment" {
source = "./aws-account"
aws-account = "${var.aws-account}"
}
Then add a variable declaration to your variables.tf.
variable "aws-account" {
description = "The environment name used to identify appropriate AWS account resources used to configure remote states. Pre-Production should be identified by the string PP. Production should be identified by the string Production. Other values may be added for other accounts later."
}
Now that we have account-specific values output from the module, they can be used in the remote state declarations like this.
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
key = "name/vpc/terraform.tfstate"
region = "${module.environment.aws-region}"
bucket = "${module.environment.s3-state-bucket-name}"
}
}
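Once this data source resolves against the correct account's bucket, its outputs are consumed as usual. For example, assuming the vpc state exports a vpc_id output (a hypothetical name), a resource elsewhere in the configuration could reference it like this:
resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
}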
Workflow Considerations
If nothing about the workflow changes after this setup, the user will be prompted for the aws-account variable whenever a plan, apply, or similar command is run. The text of the prompt is the variable's description from variables.tf.
$ terraform plan
var.aws-account
The environment name used to identify appropriate AWS account
resources used to configure remote states. Pre-Production should be
identified by the string PP. Production should be identified by the
string Production. Other values may be added for other accounts later.
Enter a value:
You can skip the prompt by providing the variable on the command line like this:
terraform plan -var="aws-account=PP"
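Another option is to keep the value in a per-environment tfvars file; the file name pp.tfvars here is just an illustration:
# pp.tfvars
aws-account = "PP"
and pass it with the -var-file flag:
terraform plan -var-file=pp.tfvars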
Accidental Misconfiguration Results
If the aws-account variable isn't specified, Terraform will prompt for it. If a value is provided that the aws-account module doesn't recognize, the run will return errors containing the string "invalid AWS account specified" several times, because that is the default value of each lookup. If aws-account is passed correctly but doesn't match the values provided at terraform init, the run will fail because the AWS credentials in use won't have access to the S3 bucket being referenced.
We faced a similar problem and solved it (partially) by creating pipelines in Jenkins (any other CI tool would work too).
We had three different environments (dev, staging and prod): same code, different tfvars, different AWS accounts.
When Terraform code is merged to master it can be applied to staging, and only when staging is green can production be run. Nobody runs Terraform manually in prod; the AWS credentials are stored in the CI tool.
This setup can prevent an accident like the one you described, and it also stops different users from applying different local code.
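As a rough sketch only (the bucket names, tfvars files, and the promotion gate below are assumptions, not details from the reply above), the two pipeline stages might reduce to something like this, each running in its own clean workspace:
# Staging stage: runs automatically when master changes
terraform init \
  -backend-config "bucket=staging-terraform-state" \
  -backend-config "region=us-east-2"
terraform apply -var-file=staging.tfvars -auto-approve

# Production stage: runs only after staging is green (and/or a manual approval)
terraform init \
  -backend-config "bucket=prod-terraform-state" \
  -backend-config "region=us-west-2"
terraform apply -var-file=prod.tfvars -auto-approve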