I am using the new Elastic File System provided by Amazon on my single-container Elastic Beanstalk deployment, and I can't figure out why the mounted EFS cannot be mapped into the container.
The EFS mount is successfully performed on the host at /efs-mount-point.
Provided to the Dockerrun.aws.json is:

{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "HostDirectory": "/efs-mount-point",
      "ContainerDirectory": "/efs-mount-point"
    }
  ]
}
The volume is then created in the container once it starts running. However, it has mapped the host's directory /efs-mount-point, not the EFS file system mounted there. I can't figure out how to get Docker to map in the EFS volume mounted at /efs-mount-point instead of the host's directory.
Do NFS volumes play nice with Docker?
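The mismatch is visible when you compare what the host and the container report for the path. This is just a diagnostic sketch (the container name is illustrative):

  # On the host: the EFS filesystem is visible
  df -h /efs-mount-point                        # shows <fs-dns-name>:/ mounted here
  # Inside the container: only the underlying host directory shows up
  docker exec my-app df -h /efs-mount-point     # reports the root block device instead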
AWS has instructions to automatically create and mount an EFS file system on Elastic Beanstalk; they can be found here. These instructions link to two config files that are to be customized and placed in the .ebextensions folder of your deployment package.
The file storage-efs-mountfilesystem.config needs to be further modified to work with Docker containers. Add the following command:
  02_restart:
    command: "service docker restart"
And for multi-container environments, the Elastic Container Service has to be restarted as well (it was killed when Docker was restarted above):

  03_start_eb:
    command: |
      start ecs
      start eb-docker-events
      sleep 120
    test: sh -c "[ -f /etc/init/ecs.conf ]"
so the complete commands section of storage-efs-mountfilesystem.config is:

commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
  02_restart:
    command: "service docker restart"
  03_start_eb:
    command: |
      start ecs
      start eb-docker-events
      sleep 120
    test: sh -c "[ -f /etc/init/ecs.conf ]"
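To sanity-check the workaround, you can confirm that a freshly started container now sees the NFS mount rather than the plain host directory (a diagnostic sketch, assuming the mount point from the question):

  # On the EC2 host, after the restart commands have run:
  df -h /efs-mount-point    # should list the EFS DNS name
  # A new container bind-mounting the same path should report nfs4, not the root device:
  docker run --rm -v /efs-mount-point:/efs-mount-point alpine \
      sh -c 'grep efs-mount-point /proc/mounts'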
The reason this does not work "out-of-the-box" is because the Docker daemon is started by the EC2 instance before the commands in .ebextensions are run. The startup order is roughly:

1. The Docker daemon starts.
2. Other instance services (e.g., the ECS agent on multi-container platforms) start.
3. The commands in .ebextensions run, including the EFS mount.
4. The application containers start.

At step one, the filesystem view the Docker daemon provides to the containers is fixed. Therefore changes to the host filesystem made during step 3 are not reflected in the container's view.
One strange effect is that the container sees the mount point as it was before the filesystem was mounted on the host, while the host sees the mounted filesystem. Therefore a file written by the container lands in the host directory hidden underneath the mount, not on the mounted filesystem. Unmounting the filesystem on the EC2 host will expose the container's files written into the mount directory.
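You can reproduce this effect by hand. A hypothetical session (container name and file name are illustrative):

  # While a running container still has the pre-mount view:
  docker exec my-app sh -c 'echo hello > /efs-mount-point/test.txt'
  ls /efs-mount-point      # test.txt is absent: the host sees the EFS contents
  umount /efs-mount-point  # unmount EFS on the host
  ls /efs-mount-point      # test.txt appears: it went to the hidden host directory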
You need to restart Docker after mounting the EFS volume on the host EC2 instance. Here's an example, .ebextensions/efs.config:
commands:
  01mkdir:
    command: "mkdir -p /efs-mount-point"
  02mount:
    command: "mountpoint -q /efs-mount-point || mount -t nfs4 -o nfsvers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-fa35c253.efs.us-west-2.amazonaws.com:/ /efs-mount-point"
  03restart:
    command: "service docker restart"
EFS with AWS Beanstalk - Multicontainer Docker will work, but a number of things will stop working, because you have to restart Docker after you mount the EFS.
Searching around, you might find that you need to do "docker restart" after mounting EFS. It's not that simple: you will run into trouble when autoscaling happens and/or when deploying a new version of the app.
Below is the script I use for mounting an EFS volume on the Docker instance. The needed steps are: stop ECS and Docker, kill any leftover network bindings, remove Docker's stale local network database, mount the EFS volume, and finally start Docker and ECS again.
Here is my script:
.ebextensions/commands.config:
commands:
  01stopdocker:
    command: "sudo stop ecs > /dev/null 2>&1 || /bin/true && sudo service docker stop"
  02killallnetworkbindings:
    command: 'sudo killall docker > /dev/null 2>&1 || /bin/true'
  03removenetworkinterface:
    command: "rm -f /var/lib/docker/network/files/local-kv.db"
    test: test -f /var/lib/docker/network/files/local-kv.db
  # Mount the EFS created in .ebextensions/media.config
  04mount:
    command: "/tmp/mount-efs.sh"
  # On new instances, a delay is needed because of the 00task enact script. It tests for start/, but it can be in various states of start...
  # Basically, "start ecs" takes some time to run, and it runs async - so we sleep for some time.
  # Let the ECS manager take its time to boot before going on to the enact scripts and post-deploy scripts.
  09restart:
    command: "service docker start && sudo start ecs && sleep 120s"
.ebextensions/mount-config.config:

# mount-config.config
# Copy this file to the .ebextensions folder in the root of your app source folder

option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_REGION: '`{"Ref": "AWS::Region"}`'
    # Replace with the required mount directory
    EFS_MOUNT_DIR: '/efs_volume'
    # Use in conjunction with efs-volume.config, or replace with the EFS volume ID of an existing EFS volume
    EFS_VOLUME_ID: '`{"Ref" : "FileSystem"}`'

packages:
  yum:
    nfs-utils: []

files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/bin/bash

      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
      EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
      EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')

      echo "Mounting EFS volume ${EFS_VOLUME_ID} to directory ${EFS_MOUNT_DIR} ..."

      echo 'Stopping NFS ID Mapper...'
      service rpcidmapd status &> /dev/null
      if [ $? -ne 0 ] ; then
          echo 'rpc.idmapd is already stopped!'
      else
          service rpcidmapd stop
          if [ $? -ne 0 ] ; then
              echo 'ERROR: Failed to stop NFS ID Mapper!'
              exit 1
          fi
      fi

      echo 'Checking if EFS mount directory exists...'
      if [ ! -d ${EFS_MOUNT_DIR} ]; then
          echo "Creating directory ${EFS_MOUNT_DIR} ..."
          mkdir -p ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ]; then
              echo 'ERROR: Directory creation failed!'
              exit 1
          fi
          chmod 777 ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ]; then
              echo 'ERROR: Permission update failed!'
              exit 1
          fi
      else
          echo "Directory ${EFS_MOUNT_DIR} already exists!"
      fi

      mountpoint -q ${EFS_MOUNT_DIR}
      if [ $? -ne 0 ]; then
          AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
          echo "mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}"
          mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ] ; then
              echo 'ERROR: Mount command failed!'
              exit 1
          fi
      else
          echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
      fi

      echo 'EFS mount complete.'
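To verify the script before wiring it into a deployment, you can run it by hand on a live instance (a quick sanity check, assuming the environment variables are already set):

  sudo /tmp/mount-efs.sh
  df -h | grep efs    # the EFS DNS name should now be mounted on /efs_volume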
You will have to change the option_settings below. To find the VPC and subnet IDs that you must define under option_settings, look in the AWS web console -> VPC, where you can find the default VPC ID and the three default subnet IDs. If your Beanstalk environment uses a custom VPC, use that VPC's values instead.
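Alternatively, you can look the IDs up with the AWS CLI (assuming it is configured for the right account and region):

  # Find the default VPC ID
  aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
      --query 'Vpcs[0].VpcId' --output text
  # List the subnet IDs in that VPC (substitute the VPC ID from above)
  aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-xxxxxxxx \
      --query 'Subnets[].SubnetId' --output text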
.ebextensions/efs-volume.config:

# efs-volume.config
# Copy this file to the .ebextensions folder in the root of your app source folder

option_settings:
  aws:elasticbeanstalk:customoption:
    EFSVolumeName: "EB-EFS-Volume"
    VPCId: "vpc-xxxxxxxx"
    SubnetUSWest2a: "subnet-xxxxxxxx"
    SubnetUSWest2b: "subnet-xxxxxxxx"
    SubnetUSWest2c: "subnet-xxxxxxxx"

Resources:
  FileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      FileSystemTags:
        - Key: Name
          Value:
            Fn::GetOptionSetting: {OptionName: EFSVolumeName, DefaultValue: "EB_EFS_Volume"}
  MountTargetSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for mount target
      SecurityGroupIngress:
        - FromPort: '2049'
          IpProtocol: tcp
          SourceSecurityGroupId:
            Fn::GetAtt: [AWSEBSecurityGroup, GroupId]
          ToPort: '2049'
      VpcId:
        Fn::GetOptionSetting: {OptionName: VPCId}
  MountTargetUSWest2a:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetUSWest2a}
  MountTargetUSWest2b:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetUSWest2b}
  MountTargetUSWest2c:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetUSWest2c}
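Once the environment is up, an optional sanity check (substitute your actual file system ID) is to confirm the three mount targets were created and are available:

  aws efs describe-mount-targets --file-system-id fs-xxxxxxxx \
      --query 'MountTargets[].[SubnetId,LifeCycleState]' --output text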