I've deployed an ELK stack to AWS ECS with Terraform. Everything ran nicely for a few weeks, but two days ago I had to restart the instance.
Sadly, the new instance did not reuse the existing volume as its root block device, so all my Elasticsearch data is no longer available to my Kibana instance.
The data is still there, on the previous volume, which is currently unattached.
I have tried several things to get this volume attached at "/dev/xvda", but without success.
I am using an aws_autoscaling_group with an aws_launch_configuration.
resource "aws_launch_configuration" "XXX" {
  name                        = "XXX"
  image_id                    = data.aws_ami.latest_ecs.id
  instance_type               = var.INSTANCE_TYPE
  security_groups             = [var.SECURITY_GROUP_ID]
  associate_public_ip_address = true
  iam_instance_profile        = "XXXXXX"
  spot_price                  = "0.04"

  lifecycle {
    create_before_destroy = true
  }

  user_data = templatefile("${path.module}/ecs_agent_conf_options.tmpl",
    {
      cluster_name = aws_ecs_cluster.XXX.name
    }
  )

  // The volume I want to reuse was created with this configuration. I thought it
  // would be enough to reuse the same volume. It isn't.
  root_block_device {
    delete_on_termination = false
    volume_size           = 50
    volume_type           = "gp2"
  }
}
resource "aws_autoscaling_group" "YYY" {
  name              = "YYY"
  min_size          = var.MIN_INSTANCES
  max_size          = var.MAX_INSTANCES
  desired_capacity  = var.DESIRED_CAPACITY
  health_check_type = "EC2"

  availability_zones   = ["eu-west-3b"]
  launch_configuration = aws_launch_configuration.XXX.name

  vpc_zone_identifier = [
    var.SUBNET_1_ID,
    var.SUBNET_2_ID,
  ]
}
Am I missing something obvious here?
It's entirely possible that I made the wrong changes, but Terraform and its internal tests seemed happy, and both the output of terraform apply and terraform show appeared to list the block devices in the order they were written. However, the BlockDeviceMappings section returned by the AWS API showed otherwise.
@davivcgarcia in some cases you might need to reference non-root device names by /dev/xvd_ instead of /dev/sd_, e.g. /dev/xvdb instead of /dev/sdb. It depends on the AMI. The ordering of the ebs_block_device blocks in the Terraform configuration does not determine any ordering of the instance's disks.
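As an illustration of the device-naming point above, here is a hedged sketch of a launch configuration declaring a secondary data volume with the xvd-style name. The resource name, sizes, and the assumption that the AMI expects /dev/xvdb are all illustrative, not taken from the question:

```hcl
# Hypothetical sketch: on AMIs that expose xvd-style device names,
# a secondary EBS volume must be declared as /dev/xvdb rather than
# /dev/sdb for the mapping to line up with what the instance sees.
resource "aws_launch_configuration" "with_extra_volume" {
  name          = "with-extra-volume"        # illustrative name
  image_id      = data.aws_ami.latest_ecs.id
  instance_type = "t2.micro"                 # illustrative size

  ebs_block_device {
    device_name = "/dev/xvdb" # not /dev/sdb on xvd-style AMIs
    volume_size = 20
    volume_type = "gp2"
  }
}
```

Whether /dev/sd_ or /dev/xvd_ is correct depends on the AMI's virtualization type and OS, so check how the AMI actually enumerates its devices before relying on either form.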
Sadly, you cannot attach an existing volume as the root volume of a new instance.
What you have to do instead is create a custom AMI from your volume. This involves taking a snapshot of the volume and then building the AMI from that snapshot.
Terraform provides the aws_ami resource for exactly this purpose.
The following Terraform script illustrates the process in three steps:
provider "aws" {
  # your data
}

resource "aws_ebs_snapshot" "snapshot" {
  volume_id = "vol-0ff4363a40eb3357c" # <-- your EBS volume ID
}

resource "aws_ami" "my" {
  name                = "my-custom-ami"
  virtualization_type = "hvm"
  root_device_name    = "/dev/xvda"

  ebs_block_device {
    device_name = "/dev/xvda"
    snapshot_id = aws_ebs_snapshot.snapshot.id
    volume_type = "gp2"
  }
}

resource "aws_instance" "web" {
  ami           = aws_ami.my.id
  instance_type = "t2.micro"
  # key_name    = "<your-key-name>"

  tags = {
    Name = "InstanceFromCustomAMI"
  }
}
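Since the question launches instances through an aws_autoscaling_group rather than a standalone aws_instance, the same custom AMI can be fed into the launch configuration instead. A minimal sketch, reusing the resource names from the question (only the image_id changes; the rest of the original configuration is elided):

```hcl
# Sketch: point the existing launch configuration at the custom AMI
# built from the snapshot, instead of the stock ECS AMI. Instances the
# ASG starts will then boot from a copy of the old root volume.
resource "aws_launch_configuration" "XXX" {
  name          = "XXX"
  image_id      = aws_ami.my.id # <-- custom AMI instead of data.aws_ami.latest_ecs.id
  instance_type = var.INSTANCE_TYPE
  # ... rest of the configuration unchanged ...
}
```

Note that because create_before_destroy is set, changing the image_id will cause Terraform to replace the launch configuration, and the ASG will only use the new AMI for instances it launches afterwards.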