What is your best practice for mounting an S3 bucket inside a Docker container? Is there a way to do this transparently? Or do I rather need to mount a volume to the host drive using the VOLUME directive, and then back up files to S3 with cron manually?
An S3 bucket can be mounted in an AWS instance as a file system using s3fs. s3fs is a FUSE file system that allows you to mount an Amazon S3 bucket as a local file system. It behaves like a network-attached drive: it does not store anything on the EC2 instance itself, but users can access the data on S3 from the EC2 instance.
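As a sketch of that host-side setup (assuming s3fs is already installed; the bucket name and credential values below are placeholders):

```shell
# Store credentials where s3fs looks for them (format: ACCESS_KEY:SECRET_KEY)
echo "ACCESS_KEY:SECRET_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the (hypothetical) bucket onto a local directory
mkdir -p /mnt/s3
s3fs your-bucket /mnt/s3 -o passwd_file=~/.passwd-s3fs

# Files written here are now stored in S3, not on local disk
echo "hello" > /mnt/s3/test.txt
```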
For registry storage, we can use filesystem, s3, azure, swift, etc. For the complete list of options, please visit the Docker site. We need to store the Docker images pushed to the registry; we will use S3 to store these images.
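For example, the registry's s3 storage driver can be configured through environment variables when starting the official registry image (region, bucket name, and credentials below are placeholders):

```shell
# Run the official registry image, backed by S3 instead of the local filesystem
docker run -d -p 5000:5000 --name registry \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=YOURAWSACCESSKEY \
  -e REGISTRY_STORAGE_S3_SECRETKEY=YOURAWSSECRETACCESSKEY \
  registry:2
```

Images pushed to `localhost:5000` are then stored in the bucket rather than on the host.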
There are different approaches depending on what you want to accomplish, but here is how I did it using s3fs-fuse.
I created a Docker image based on Ubuntu with the following Dockerfile:
Dockerfile
FROM ubuntu:18.04
## Some utilities
RUN apt-get update -y && \
    apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml2-dev pkg-config libssl-dev \
    mime-support automake libtool wget tar git unzip lsb-release zip vim
## Install AWS CLI
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
## Install S3 Fuse
RUN rm -rf /usr/src/s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse/ /usr/src/s3fs-fuse
WORKDIR /usr/src/s3fs-fuse
RUN ./autogen.sh && ./configure && make && make install
## Create folder
WORKDIR /var/www
RUN mkdir s3
## Set Your AWS Access credentials
ENV AWS_ACCESS_KEY=YOURAWSACCESSKEY
ENV AWS_SECRET_ACCESS_KEY=YOURAWSSECRETACCESSKEY
## Set the directory where you want to mount your s3 bucket
ENV S3_MOUNT_DIRECTORY=/var/www/s3
## Replace with your s3 bucket name
ENV S3_BUCKET_NAME=your-s3-bucket-name
## S3fs-fuse credential config
RUN echo $AWS_ACCESS_KEY:$AWS_SECRET_ACCESS_KEY > /root/.passwd-s3fs && \
chmod 600 /root/.passwd-s3fs
## change workdir to /
WORKDIR /
## Entry Point
ADD start-script.sh /start-script.sh
RUN chmod 755 /start-script.sh
CMD ["/start-script.sh"]
and the start script should be:
start-script.sh
#!/bin/bash
# Mount the bucket (s3fs reads /root/.passwd-s3fs by default),
# then keep the container running in the foreground
s3fs "$S3_BUCKET_NAME" "$S3_MOUNT_DIRECTORY"
tail -f /dev/null
Then build your image. If you create a file in the mounted directory, it should also be reflected in the S3 console, and vice versa.
I have a more detailed explanation here with a working example: https://github.com/skypeter1/docker-s3-bucket
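One caveat worth noting: FUSE mounts inside a container need extra privileges at run time. A sketch of building and running the image above (the image name is arbitrary; `--privileged` is the blunter alternative to the capability and device flags):

```shell
docker build -t s3fs-demo .
# FUSE needs access to /dev/fuse and the SYS_ADMIN capability inside the container
docker run -d --name s3fs-demo \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  s3fs-demo
```

Also consider passing the AWS credentials at run time with `-e` instead of baking them into the image with ENV, since anyone with the image can read them; the Dockerfile above would need to write the .passwd-s3fs file in the start script instead of at build time for that to work.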
There doesn't seem to be out-of-the-box support for Amazon S3 in popular container storage solutions like Flocker and EMC REX-Ray. However, if you're open to storing your data on Amazon EBS volumes, EMC REX-Ray allows you to create, mount, and take snapshots of your volumes.
Of course, the approach you suggested works perfectly well too. You can install the AWS CLI on the host running your containers and write a simple cron job that copies the data from the host directory mapped to your container volume to your S3 bucket.
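A minimal sketch of that cron approach, assuming the container volume is mapped to /data/app on the host and the bucket name is a placeholder:

```shell
# /etc/cron.d/s3-backup — sync the host directory to S3 every hour
0 * * * * root aws s3 sync /data/app s3://your-backup-bucket/app
```

`aws s3 sync` only uploads files that are new or changed since the last run, so hourly runs stay cheap even for large directories.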