I created an EBS volume and attached and mounted it to my container instance. In the task definition volumes I set the volume source path to the mounted directory. The container data is not being created in the mounted directory; all other directories outside the mounted EBS volume work properly.
The purpose is to keep the data outside of the container so that another volume can back it up.
Is there a way to use this attached volume with my container? Or is there a better way to work with volumes and backups?
EDIT: I tested this with a random Docker image, running it while specifying the volume, and I faced the same problem. I managed to make it work by restarting the Docker service, but I'm still looking for a solution that doesn't require restarting Docker.
Inspecting a container with a volume directory that is on the mounted EBS volume:
"HostConfig": {
"Binds": [
"/mnt/data:/data"
],
...
"Mounts": [
{
"Source": "/mnt/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
The directory shows:
$ ls /mnt/data/
lost+found
Inspecting a container with a volume directory that is not on the mounted EBS volume:
"HostConfig": {
"Binds": [
"/home/ec2-user/data:/data"
],
...
"Mounts": [
{
"Source": "/home/ec2-user/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
The directory shows:
$ ls /home/ec2-user/data
databases dbms
Create an ECS cluster built on top of two EC2 instances, with the REX-Ray Docker plugin installed on both instances. Then create an ECS task definition for the Postgres database; the task definition will include the Docker volume configuration required to use the REX-Ray volume driver to attach a new EBS volume.
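As a rough sketch of what that volume configuration might look like (the volume name, size, and driver options here are assumptions, not values from your setup), the volumes section of the task definition could be something like:
"volumes": [
    {
        "name": "postgres-data",
        "dockerVolumeConfiguration": {
            "scope": "shared",
            "autoprovision": true,
            "driver": "rexray/ebs",
            "driverOpts": {
                "volumetype": "gp2",
                "size": "20"
            }
        }
    }
]
The container definition would then mount it through a mountPoints entry, e.g. {"sourceVolume": "postgres-data", "containerPath": "/var/lib/postgresql/data"}.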
For Amazon ECS clusters that contain Linux instances or Linux containers used with Fargate, Amazon ECS integrates with Amazon EFS to provide container storage. Amazon EBS can only be used with Amazon ECS clusters using container instances.
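If you go the EFS route instead, the task definition references the file system through an efsVolumeConfiguration block; a minimal sketch (the volume name and file system ID are placeholders):
"volumes": [
    {
        "name": "efs-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-12345678",
            "rootDirectory": "/"
        }
    }
]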
It sounds like what you want to do is make use of AWS EC2 Launch Configurations. Using a Launch Configuration, you can specify EBS volumes to be created and attached to your instance at launch. This happens before the Docker agent and subsequent tasks are started.
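For instance, a launch configuration created from the CLI could declare an extra EBS volume on /dev/xvdb (the name, AMI, instance type, and volume size below are illustrative; the device name just needs to match what the user data formats and mounts):
aws autoscaling create-launch-configuration \
    --launch-configuration-name ecs-with-ebs \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.medium \
    --block-device-mappings '[{"DeviceName": "/dev/xvdb", "Ebs": {"VolumeSize": 20, "VolumeType": "gp2", "DeleteOnTermination": true}}]'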
As part of your launch configuration, you'll want to also update the User data under Configure details with something along the lines of:
#!/bin/bash
mkdir -p /data;
# Format the new volume (only do this on a brand-new, empty volume).
mkfs -t ext4 /dev/xvdb;
# Mount it now, and on every subsequent boot via fstab.
mount /dev/xvdb /data;
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab;
Then, as long as your container is set up to access /data on the host, everything will work on the first go.
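In ECS terms, that just means pointing the task definition volume at the host path; a minimal sketch (the volume name and container path are chosen here for illustration):
"volumes": [
    {
        "name": "data",
        "host": {
            "sourcePath": "/data"
        }
    }
],
...
"mountPoints": [
    {
        "sourceVolume": "data",
        "containerPath": "/data"
    }
]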
Bonus: If you're using ECS clusters, I presume you're already making use of Launch Configurations to get your instances joined to the cluster. If not, you can add new instances automatically as well, using something like:
#!/bin/bash
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent \
    --detach=true \
    --restart=on-failure:10 \
    --volume=/var/run/docker.sock:/var/run/docker.sock \
    --volume=/var/log/ecs/:/log \
    --volume=/var/lib/ecs/data:/data \
    --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
    --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
    --publish=127.0.0.1:51678:51678 \
    --env=ECS_LOGFILE=/log/ecs-agent.log \
    --env=ECS_AVAILABLE_LOGGING_DRIVERS=[\"json-file\",\"syslog\",\"gelf\"] \
    --env=ECS_LOGLEVEL=info \
    --env=ECS_DATADIR=/data \
    --env=ECS_CLUSTER=your-cluster-here \
    amazon/amazon-ecs-agent:latest
Specifically, you'll want to edit this part: --env=ECS_CLUSTER=your-cluster-here
Hope this helps.