I'm using the AWS Two-tier example and I've copied the whole thing verbatim. terraform apply
works right up to the point where it tries to SSH into the created EC2 instance. It loops several times, giving this output, before finally failing:
aws_instance.web (remote-exec): Connecting to remote host via SSH...
aws_instance.web (remote-exec): Host: 54.174.8.144
aws_instance.web (remote-exec): User: ubuntu
aws_instance.web (remote-exec): Password: false
aws_instance.web (remote-exec): Private key: false
aws_instance.web (remote-exec): SSH Agent: true
Ultimately, it fails with:
Error applying plan:
1 error(s) occurred:
* ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
I've searched around and seen some older posts/issues suggesting flipping agent = false,
and I've tried that as well, with no change and no success. I'm skeptical that this example is broken out of the box, yet I've done no tailoring or modifications that could have broken it. I'm using Terraform 0.6.11, installed via Homebrew, on OS X 10.10.5.
Additional detail:
resource "aws_instance" "web" {
# The connection block tells our provisioner how to
# communicate with the resource (instance)
connection {
# The default username for our AMI
user = "ubuntu"
# The connection will use the local SSH agent for authentication.
agent = false
}
instance_type = "t1.micro"
# Lookup the correct AMI based on the region
# we specified
ami = "${lookup(var.aws_amis, var.aws_region)}"
# The name of our SSH keypair we created above.
key_name = "${aws_key_pair.auth.id}"
# Our Security group to allow HTTP and SSH access
vpc_security_group_ids = ["${aws_security_group.default.id}"]
# We're going to launch into the same subnet as our ELB. In a production
# environment it's more common to have a separate private subnet for
# backend instances.
subnet_id = "${aws_subnet.default.id}"
# We run a remote provisioner on the instance after creating it.
# In this case, we just install nginx and start it. By default,
# this should be on port 80
provisioner "remote-exec" {
inline = [
"sudo apt-get -y update",
"sudo apt-get -y install nginx",
"sudo service nginx start"
]
}
}
And from the variables tf file:

variable "key_name" {
  description = "Desired name of AWS key pair"
  default     = "test-keypair"
}

variable "key_path" {
  description = "key location"
  default     = "/Users/n8/dev/play/.ssh/terraform.pub"
}
But I can SSH in manually with this command:

ssh -i ../.ssh/terraform ubuntu@54.174.8.144
This error occurs if you set a passphrase on your key file but haven't entered it manually, so SSH cannot use the key. To resolve the error, enter the passphrase or use ssh-agent to load the key automatically. (There are a number of other reasons you might get an SSH error, such as "Resource temporarily unavailable".)
You have two possibilities:

1. Add your key to your ssh-agent:

   ssh-add ../.ssh/terraform

   and use agent = true in your configuration. That case should work for you.

2. Modify your configuration to use the key directly, with private_key = "${file("../.ssh/terraform")}" (the attribute is private_key, not secret_key; very old 0.6.x releases used key_file instead). Please consult the documentation for the exact syntax for your version.

Both variants are sketched below.
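For concreteness, here is a minimal sketch of both connection blocks, assuming your key pair lives at ../.ssh/terraform as in the question and that your Terraform version supports the private_key attribute (older 0.6.x releases used key_file):

# Option 1: authenticate via your local ssh-agent.
# Run `ssh-add ../.ssh/terraform` first so the agent holds the key.
connection {
  user  = "ubuntu"
  agent = true
}

# Option 2: hand Terraform the private key directly.
# Note this is the private key, not the .pub file referenced by key_path.
connection {
  user        = "ubuntu"
  agent       = false
  private_key = "${file("../.ssh/terraform")}"
}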
I had the same issue, and the following configuration worked for me:

connection {
  type        = "ssh"
  user        = "ec2-user"
  # "*.pem" is a placeholder: file() does not expand globs,
  # so point it at the actual path of your private key.
  private_key = "${file("*.pem")}"
  timeout     = "2m"
  agent       = false
}
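Setting agent = false together with an explicit private_key makes the provisioner's authentication deterministic: Terraform tries exactly the key you supply rather than whatever happens to be loaded in your agent, and the timeout gives the instance time to finish booting sshd before the connection attempt is abandoned.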