I'd like to share the ~/mydir directory with the host, but without replacing the Docker container's directory contents with the host files.
So I have this docker-compose.yml:
version: '2'
services:
  app:
    container_name: mono
    build: .
    volumes:
      # save .composer files on host to keep cache warmed up
      - '/srv/mono/mydir:/root/mydir'
    command: sleep infinity
And this Dockerfile:
FROM php:5.6
RUN mkdir /root/mydir && echo '{}' > /root/mydir/myfile.json
VOLUME /root/mydir
The directory /srv/mono/mydir ends up empty: the container's directory was replaced by the (empty) host directory. That part is clear. But how do I keep the original files from the image?
For example, it works for MySQL Percona containers:
version: '2'
services:
  percona-56:
    container_name: percona-56
    image: percona/percona-server:5.6
    volumes:
      - /srv/mysql/percona-56:/var/lib/mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
The host directory contains the original files from the container:
$ ll /srv/mysql/percona-56
total 176220
auto.cnf
error.log
ibdata1
ib_logfile0
ib_logfile1
init.ok
mysql
performance_schema
I've inspected the Percona Dockerfile but didn't find anything related to volume sharing.
$ docker --version
Docker version 1.12.3, build 6b644ec
Volumes are stored in a part of the host filesystem that is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker. Bind mounts may be stored anywhere on the host system.
Unlike bind mounts, where you can mount any directory from your host, volumes are stored in a single location (most likely /var/lib/docker/volumes/ on Unix systems), which greatly facilitates managing data (backup, restore, and migration). Docker volumes can safely be shared between several running containers.
Use bind mounts when you want to control the exact mount point on the host. This approach also persists data, but is more often used to provide extra data to containers. For example, you can bind-mount source code into the container so it sees code changes and you see the results right away.
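If the goal is simply to persist /root/mydir without hiding the files baked into the image, a named volume does exactly that: when the volume is empty on first use, Docker seeds it with the image's contents at that path. A minimal sketch, assuming the same app service from the question (the volume name mydir-data is illustrative):
version: '2'
services:
  app:
    container_name: mono
    build: .
    volumes:
      # named volume: seeded from the image's /root/mydir on first use,
      # then persisted across container recreations
      - 'mydir-data:/root/mydir'
    command: sleep infinity
volumes:
  mydir-data:
The trade-off is that the files no longer sit at a host path you chose, such as /srv/mono/mydir; they live under Docker's own storage, and docker volume inspect shows where, should you need to reach them directly.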
When you add a volume at docker run, what you are saying is to use the host filesystem for that path instead of the copy-on-write filesystem that Docker uses for images. There are two main options here: named volumes and bind mounts, as described above.
You are looking to get both: you want a fixed location on your filesystem, but you also want the files from your image to be there. There is a reason it doesn't work this way! What happens if auto.cnf already exists in that folder when you launch your container? What happens if you run two containers with different versions of that file pointed at the same location? That is why, when you pick a real host location, Docker does not attempt to guess how to resolve conflicts between the image and the filesystem; it just goes with the filesystem.
You CAN achieve what you want, though. There are really two options. The better one is to have your app read from two separate folders: one that is populated inside the image, and one that is on your filesystem (see the sketch after the compose example below). That completely avoids the problem ;) The second option is to tell Docker how to handle individual files in your image:
version: '2'
services:
  app:
    container_name: mono
    build: .
    volumes:
      # save .composer files on host to keep cache warmed up
      - '/srv/mono/mydir:/root/mydir'
      # Marking a volume this way will tell Docker to use THIS file
      # from the image, even though the parent directory is a regular
      # volume. If you have an auto.cnf file in your directory, it
      # will be ignored.
      - /root/mydir/auto.cnf
    command: sleep infinity
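And here is a minimal sketch of the first option, keeping the image's files and the host data in separate paths (the /root/mydir-host mount point is an illustrative name; your app would read from both directories):
version: '2'
services:
  app:
    container_name: mono
    build: .
    volumes:
      # host data mounted next to, not over, the image's /root/mydir,
      # so /root/mydir/myfile.json from the image stays visible
      - '/srv/mono/mydir:/root/mydir-host'
    command: sleep infinity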