 

Mount S3 bucket as filesystem on AWS ECS container

I am trying to mount an S3 bucket as a volume on an AWS ECS Docker container using the rexray/s3fs driver.

I am able to do this on my local machine, where I installed the plugin

$ docker plugin install rexray/s3fs

and mounted S3 bucket on docker container.

$ docker plugin ls

ID                  NAME                 DESCRIPTION                                     ENABLED
3a0e14cadc17        rexray/s3fs:latest   REX-Ray FUSE Driver for Amazon Simple Storage   true

$ docker run -ti --volume-driver=rexray/s3fs -v s3-bucket:/data img

I am trying to replicate this on AWS ECS.

I tried to follow this document: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html

If I give a Driver value, the task is not able to run and gives the error "was unable to place a task because no container instance met all of its requirements."

I am using a t2.medium instance, and the task's resource requirements are well within its capacity, so it should not be a hardware requirement issue.

If I remove the Driver config from the task definition, the task gets executed.

It seems I am misconfiguring something.

Has anyone tried the same thing? Please share your knowledge.

Thanks!!

asked Aug 27 '18 by Pratik Mungekar




2 Answers

I have gotten s3fs to work in my ECS containers by just running the s3fs command directly to mount the bucket in the container. I'm not familiar with the rexray driver; it may provide some benefits over plain s3fs, but for a lot of use cases this approach works well and does not require any UserData editing.

I made it a little smoother by setting my container's entrypoint to be the following:

#!/bin/bash
bucket=my-bucket

# Mount the bucket using the ECS task-role credentials, then hand off to the container's command
s3fs ${bucket} /data -o ecs
echo "Mounted ${bucket} to /data"

exec "$@"

The -o ecs option is critical for assuming the ECS Task Role; if you use the regular -o iam_role=auto, s3fs will instead assume the IAM role of the EC2 instance running the ECS agent.

Note that the ECS Task Role will need the s3:GetObject, s3:PutObject, and s3:ListBucket IAM permissions for the bucket you are trying to mount. If you want the container to have read-only access to the bucket, you can enforce that at the IAM level by leaving off the s3:PutObject permission. You can also use fine-grained IAM resource statements to allow or disallow writes to only certain S3 prefixes. Some ugly errors will be thrown if you try to write a file to the s3fs filesystem without permission to make the underlying S3 API calls, but in general it all works fine.
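As an illustration, a task-role policy along the following lines should cover those calls. The bucket name is a placeholder, and the action list mirrors the answer above; your s3fs usage may need more (for example s3:DeleteObject for deletions):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}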

Note: the version of s3fs installed by apt-get install s3fs is old and, as of this writing, does not have this option available, which means you may need to install s3fs from source.
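For illustration, a minimal Dockerfile sketch along these lines builds s3fs from source and wires in the entrypoint above; the base image, package list, and file names are my assumptions, not from the original answer:

FROM ubuntu:22.04

# Build s3fs from source so the -o ecs option is available
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y \
    build-essential git automake libtool pkg-config \
    libfuse-dev libcurl4-openssl-dev libxml2-dev libssl-dev \
 && git clone https://github.com/s3fs-fuse/s3fs-fuse /tmp/s3fs-fuse \
 && cd /tmp/s3fs-fuse \
 && ./autogen.sh && ./configure && make && make install

# Mount point used by the entrypoint
RUN mkdir -p /data

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]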

Also note: you will need to run your containers in privileged mode for the s3fs mount to work.
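In the task definition's container definition, that looks roughly like the fragment below (names are placeholders; privileged mode is only supported on the EC2 launch type, not Fargate):

"containerDefinitions": [
  {
    "name": "my-container",
    "image": "my-image",
    "essential": true,
    "privileged": true
  }
]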

answered Sep 18 '22 by qwwqwwq


Your approach of using the rexray/s3fs driver is correct.

These are the steps I followed to get things working on Amazon Linux 1.

First you will need to install s3fs.

yum install -y gcc gcc-c++ libstdc++-devel fuse fuse-devel curl-devel libxml2-devel mailcap automake openssl-devel git
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
make install
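To confirm the build succeeded before moving on, check the installed binary (the version output will vary):

s3fs --version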

Now install the driver. There are some options here you might want to modify, such as the AWS region, or using an IAM role instead of an access key.

docker plugin install rexray/s3fs:latest S3FS_REGION=ap-southeast-2 S3FS_OPTIONS="allow_other,iam_role=auto,umask=000" LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_ROOTPATH=/ --grant-all-permissions

Now comes the very important step of restarting the ECS agent. I also update it for good measure.

yum update -y ecs-init
service docker restart && start ecs
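At this point it's worth sanity-checking the plugin directly on the instance before involving ECS; something like the following (the bucket name is a placeholder) should list the bucket's contents:

docker plugin ls                  # rexray/s3fs should show ENABLED=true
docker run --rm --volume-driver=rexray/s3fs -v name-of-your-s3-bucket:/data alpine ls /data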

You should now be ready to create your task definition. The important part is the volume configuration, shown below.

"volumes": [
  {
    "name": "name-of-your-s3-bucket",
    "host": null,
    "dockerVolumeConfiguration": {
      "autoprovision": false,
      "labels": null,
      "scope": "shared",
      "driver": "rexray/s3fs",
      "driverOpts": null
    }
  }
]

Now you just need to specify the mount point in the container definition:

"mountPoints": [
  {
    "readOnly": null,
    "containerPath": "/where/ever/you/want",
    "sourceVolume": "name-of-your-s3-bucket"
  }
]

Now, as long as you have the appropriate IAM permissions for accessing the S3 bucket, your container should start and you can get on with using S3 as a volume.
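If you are maintaining the task definition as a JSON file, registering it with the AWS CLI is straightforward; a small sketch (the file name is an assumption):

# register the task definition from a local JSON file
aws ecs register-task-definition --cli-input-json file://taskdef.json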

If you get an error running the task that mentions "ATTRIBUTE", double-check that the plugin was successfully installed on the EC2 instance and that the ECS agent has been restarted. Also double-check that your driver name is "rexray/s3fs".

answered Sep 17 '22 by wimnat