I'm trying to share data between two Docker containers that are running in a multi-container AWS EC2 instance.
Normally, I would specify the volume as a command-line flag when running the container, e.g.:
docker run -p 80:80 -p 443:443 --link Widget:Widget --volumes-from Widget --name Nginx1 -d nginx1
to share a volume from Widget with Nginx1.
However, since Elastic Beanstalk requires you to specify your Docker configuration in a Dockerrun.aws.json file and then handles running your Docker containers internally, I haven't been able to figure out how to share data volumes between containers.
Note that I'm not trying to share data from the EC2 instance into a Docker container -- that part seems to work fine; rather, I would like to share data directly from one Docker container to another. I know that Docker container volumes are shared with the host at a path like "/var/lib/docker/volumes/fac362...80535", but since this location is not static, I don't know how I would reference it in the Dockerrun.aws.json file.
Has anyone found a solution or a workaround?
More info on dockerrun.aws.json
and the config EB is looking for here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
Thanks!
Multiple containers can run with the same volume when they need access to shared data. Docker creates a local volume by default. However, we can use a volume driver to share data across multiple machines. Finally, Docker also has --volumes-from to link volumes between running containers.
If you are running more than one container, you can let your containers communicate with each other by attaching them to the same network. Docker creates virtual networks which let your containers talk to each other. In a network, a container has an IP address, and optionally a hostname.
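As a sketch of those two mechanisms using the plain Docker CLI (outside Elastic Beanstalk; the container, image, and network names here are made up for illustration):

```
# Create a user-defined bridge network so containers can reach
# each other by name (hypothetical network name "mynet").
docker network create mynet

# Run a data-producing container (assumed image "myapp" that
# declares a VOLUME in its Dockerfile).
docker run -d --name app --network mynet myapp

# Mount all of app's volumes into a second container with
# --volumes-from, and attach it to the same network so it can
# resolve "app" as a hostname.
docker run -d --name web --network mynet --volumes-from app nginx
```

The question is about achieving the equivalent of that `--volumes-from` flag through the Dockerrun.aws.json file.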
To accomplish what you want, you need to use the volumesFrom parameter correctly, and you need to make sure the container sharing its internal data exposes the volume with a VOLUME instruction.
Here's an example Dockerfile which I used to bundle some static files for serving via a webserver:
FROM tianon/true
COPY build/ /opt/static
VOLUME ["/opt/static"]
Now the relevant parts of the Dockerrun.aws.json:
{
  "name": "staticfiles",
  "image": "mystaticcontainer",
  "essential": false,
  "memory": 16
},
{
  "name": "webserver",
  ...
  "volumesFrom": [
    {
      "sourceContainer": "staticfiles"
    }
  ]
}
Note that you don't need any volumes entry in the root of the Dockerrun.aws.json file, since the volume is only shared between the two containers and not persisted on the host. You also don't need any specific mountPoints key in the container definition holding the volume to be shared, as the container with volumesFrom automatically picks up all the volumes from the referenced container. In this example, all the files in /opt/static in the staticfiles container will also be available to the webserver container at the same location.
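Putting the pieces together, a minimal complete Dockerrun.aws.json might look like the following sketch (the webserver image, memory values, and port mapping are illustrative assumptions, not taken from the original setup):

```
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "staticfiles",
      "image": "mystaticcontainer",
      "essential": false,
      "memory": 16
    },
    {
      "name": "webserver",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "volumesFrom": [
        {
          "sourceContainer": "staticfiles"
        }
      ]
    }
  ]
}
```

Because the staticfiles container only exists to hold data, it is marked "essential": false so Elastic Beanstalk doesn't restart the whole task when it exits.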
From the AWS docs I found this:
You can define one or more volumes on a container, and then use the volumesFrom parameter in a different container definition (within the same task) to mount all of the volumes from the sourceContainer at their originally defined mount points.
The volumesFrom parameter applies to volumes defined in the task definition, and those that are built into the image with a Dockerfile.