I want to use EFS with Fargate, but I get this error when the task starts:
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-xxxxx.efs.eu-west-1.amazonaws.com" - check that your file system ID is correct
I have checked the file system ID and it is correct... How can I get more information about this error? Could it be related to the security groups?
This is the Terraform code I use; I create two mount targets, one for each Availability Zone:
resource "aws_efs_file_system" "efs_apache" {
}
resource "aws_efs_mount_target" "efs-mount" {
count = 2
file_system_id = aws_efs_file_system.efs_apache.id
subnet_id = sort(var.subnet_ids)[count.index]
security_groups = [aws_security_group.efs.id]
}
resource "aws_efs_access_point" "efs-access-point" {
file_system_id = aws_efs_file_system.efs_apache.id
}
resource "aws_security_group" "efs" {
name = "${var.name}-efs-sg"
description = "Allow traffic from self"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
security_groups = [aws_security_group.fargate_sg.id]
}
}
And this is the Fargate task definition:
resource "aws_ecs_task_definition" "task_definition" {
family = var.name
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
execution_role_arn = aws_iam_role.task_execution_role.arn
task_role_arn = aws_iam_role.task_role.arn
cpu = var.cpu
memory = var.memoryHardLimit
volume {
name = "efs-apache"
efs_volume_configuration {
file_system_id = aws_efs_file_system.efs_apache.id
root_directory = "/"
transit_encryption = "ENABLED"
authorization_config {
access_point_id = aws_efs_access_point.efs-access-point.id
iam = "ENABLED"
}
}
}
depends_on = [aws_efs_file_system.efs_apache]
  container_definitions = <<EOF
[
  {
    "name": "${var.name}",
    "image": "${data.aws_caller_identity.current.account_id}.dkr.ecr.${data.aws_region.current.name}.amazonaws.com/${lower(var.project_name)}_app:latest",
    "memory": ${var.memoryHardLimit},
    "memoryReservation": ${var.memorySoftLimit},
    "cpu": ${var.cpu},
    "essential": true,
    "command": [
      "/bin/sh -c \"/app/start.sh\""
    ],
    "entryPoint": [
      "sh",
      "-c"
    ],
    "mountPoints": [
      {
        "containerPath": "/var/www/sites_json",
        "sourceVolume": "efs-apache",
        "readOnly": false
      }
    ],
    "portMappings": [
      {
        "containerPort": ${var.docker_container_port},
        "hostPort": ${var.docker_container_port}
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${var.name}-Task-LogGroup",
        "awslogs-region": "${data.aws_region.current.name}",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }
]
EOF
}
How can I solve this?
To mount an Amazon EFS file system on a Fargate task or container, you reference the file system in your task definition; the file system is then available to the containers in your task across all Availability Zones in your AWS Region.
You can use Amazon EFS file systems with Amazon ECS to access file system data across your fleet of Amazon ECS tasks. That way, your tasks have access to the same persistent storage, no matter the infrastructure or container instance on which they land.
Make sure you have enabled DNS Resolution and DNS hostnames in your VPC. EFS needs both these options enabled to work since it relies on the DNS hostname to resolve the connection. This had me stuck for a while since most documentation on the internet focuses on the security groups for this error.
The Terraform AWS provider resource aws_vpc sets enable_dns_hostnames = false by default, so you'll need to explicitly set it to true. Your Terraform VPC config should look something like this:
resource "aws_vpc" "main" {
cidr_block = "10.255.248.0/22"
enable_dns_hostnames = true
}
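For completeness: since the task uses awsvpc networking, the task ENI also has to be placed in subnets of that VPC and use the security group that the EFS security group's ingress rule references. The ECS service isn't shown in the question, so this is only a rough sketch reusing the question's names (fargate_sg, subnet_ids, task_definition) plus an assumed aws_ecs_cluster resource; adjust it to your setup:

resource "aws_ecs_service" "service" {
  name             = var.name
  cluster          = aws_ecs_cluster.cluster.id # assumed cluster resource, not shown in the question
  task_definition  = aws_ecs_task_definition.task_definition.arn
  desired_count    = 1
  launch_type      = "FARGATE"
  platform_version = "1.4.0" # EFS volumes require Fargate platform version 1.4.0 or later

  network_configuration {
    subnets          = var.subnet_ids                     # same subnets as the EFS mount targets
    security_groups  = [aws_security_group.fargate_sg.id] # allowed by the EFS security group's NFS ingress rule
    assign_public_ip = false
  }
}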
I've spent a few hours investigating the same issue, and in my case the problem was that the EFS file system had no mount targets in the subnets (the aws_efs_mount_target resource was missing from my Terraform script).
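If you hit that variant of the error, the fix is to create one mount target per subnet/Availability Zone that your tasks run in. A minimal sketch, assuming a list of subnet IDs in var.subnet_ids and a security group that allows NFS (port 2049) from the tasks:

resource "aws_efs_mount_target" "this" {
  count           = length(var.subnet_ids)
  file_system_id  = aws_efs_file_system.efs_apache.id
  subnet_id       = var.subnet_ids[count.index]
  security_groups = [aws_security_group.efs.id]
}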