I'm using docker-compose for deployment, with a two-file docker-compose.yml setup: I build the image locally and pull it from Docker Hub on the server.
Apart from building vs. pulling the image, the volumes configuration is identical.
Locally:
app:
  build: .
  volumes:
    - "/data/volume:/volume"
And on the server:
app:
  image: username/repo:tag
  volumes:
    - "/data/volume:/volume"
In my Dockerfile:
VOLUME /volume
Locally, the volume mounts to the specified directory as expected, and files created by the app are persisted there outside the container. On the deployment server, however, this does not happen.
Files are, however, created and persisted across deploys, even though my deployment script runs docker-compose down -v,
which presumably removes the named and anonymous volumes attached to the containers.
I'm sure I'm doing something wrong, but I can't see what. Could it be a caching issue? The volume configuration was not the same on the initial deploy.
I actually can't seem to force the images to be lost between deploys. I ran:
docker-compose down -v --rmi all --remove-orphans
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
docker volume rm $(docker volume ls -q)
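(For what it's worth, a roughly equivalent cleanup can be done in one command on recent Docker versions; this is just a condensed sketch of the steps above, not something from my actual deploy script:)

```shell
# Remove all stopped containers, unused networks, all unused images,
# and (with --volumes) all unused local volumes, without prompting.
docker system prune -a --volumes -f
```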
... which I thought would leave me with a clean slate for redeploying, then:
docker pull username/repo:tag
docker-compose build --no-cache --force-rm
docker-compose up -d
... and the files which are supposed to be in the mounted volume are still there, and there's still nothing in the mounted dir on the disk. Any ideas?
Running docker inspect <container>
on the server yields a mount configuration like this:
"Mounts": [
    {
        "Source": "/data/volume",
        "Destination": "/volume",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
]
I notice there's no Driver specified, and I'm not sure about the significance of "rprivate", but the Source and Destination do appear to be correct.
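(As an aside, a quicker way to pull out just the mount information is a Go-template format string; the container name here is a placeholder:)

```shell
# Print only the Mounts section of the container's config as JSON
docker inspect --format '{{json .Mounts}}' my_app_container
```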
The problem was that I'd mounted an EBS volume to /volume
after the Docker service had been started.
The directory was mounted in the container, which is why the docker inspect
output looks correct, but the container mounted the pre-existing mount point, which had since been overlaid by the host's own mount.
That mount happened after the Docker service was started, but long before any containers were actually started, so it didn't occur to me that Docker might not respect a filesystem change that had happened so much earlier.
The solution was just to restart the Docker service.
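In concrete terms, the fix was along these lines (the device name and the systemd service name are assumptions that depend on the instance type and distro):

```shell
# Make sure the EBS volume is actually mounted at /data/volume first
# (device name /dev/xvdf is an assumption)
sudo mount /dev/xvdf /data/volume

# Restart the Docker daemon so it sees the new mount
sudo systemctl restart docker

# Then bring the stack back up
docker-compose up -d
```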