I created a Docker container in which I mount a FUSE S3QL filesystem, and this works.
Now I'd like to share this mount point with the host or with other containers, but it does not work.
In short, I run the container this way:
docker run --rm -d -v /s3ql:/s3ql \
--cap-add SYS_ADMIN --device /dev/fuse \
--name myContainer \
myS3qlIimage mount.s3ql swiftks://url:container /s3ql
docker exec myContainer ls /s3ql
shows the actual S3QL content, but /s3ql on the host is empty.
More details on how I did so far on my repo: https://gitlab.com/Salokyn/docker-s3ql
Do you think it is possible to make this work?
Multiple containers can run with the same volume when they need access to shared data. Docker creates a local volume by default, but a volume driver can be used to share data across multiple machines, and Docker also has --volumes-from to link volumes between running containers.
The VOLUME instruction mounts a directory inside your container and stores any files created or edited in that directory on the host's disk, outside the container's file structure, bypassing the union file system.
Normally, when you start a Docker container, it is run in a private mount namespace: this means that (a) filesystems mounted inside the container won't be visible on the host, and (b) filesystems mounted on the host won't be visible inside the container.
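You can see this namespace isolation without Docker at all, using unshare from util-linux (a sketch; the temporary directory and user-namespace support are assumptions about your system):

```shell
# A tmpfs mounted inside a private mount namespace is invisible outside it.
d=$(mktemp -d)
unshare --map-root-user --mount sh -c "
  mount -t tmpfs tmpfs $d      # this mount exists only in the new namespace
  touch $d/inside
  ls $d                        # shows: inside
"
ls "$d"                        # empty: the mount never propagated out
```

This is exactly what happens to a FUSE mount made inside a container's default (private) mount namespace: it works inside, but the host never sees it.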
You can modify this behavior using the bind-propagation flag of the --mount option. There are six possible values for this flag:
shared
: Sub-mounts of the original mount are exposed to replica mounts, and sub-mounts of replica mounts are also propagated to the original mount.
slave
: Similar to a shared mount, but only in one direction: if the original mount exposes a sub-mount, the replica mount can see it, but if the replica mount exposes a sub-mount, the original mount cannot see it.
private
: The mount is private. Sub-mounts within it are not exposed to replica mounts, and sub-mounts of replica mounts are not exposed to the original mount.
rshared
: The same as shared, but the propagation also extends to and from mount points nested within any of the original or replica mount points.
rslave
: The same as slave, but the propagation also extends to and from mount points nested within any of the original or replica mount points.
rprivate
: The default. The same as private, meaning that no mount points anywhere within the original or replica mount points propagate in either direction.
Based on your question, you probably want the rshared option, which permits mounts created inside the container to be visible on the host. This means your docker command line would look something like:
docker run --rm \
--mount type=bind,source=/s3ql,target=/s3ql,bind-propagation=rshared \
--cap-add SYS_ADMIN --device /dev/fuse --name myContainer \
myS3qlIimage mount.s3ql swiftks://url:container /s3ql
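One caveat: Docker will reject bind-propagation=rshared if the source path on the host is not itself under a mount with shared propagation. On most systemd-based hosts / is already mounted shared, but if you get a "not a shared mount" error you can mark the path shared yourself (a sketch, run as root; /s3ql is the path from the question):

```shell
# Turn /s3ql into its own mount point and mark it shared, so that mounts
# created under it inside the container propagate back to the host.
mkdir -p /s3ql
mount --bind /s3ql /s3ql
mount --make-rshared /s3ql
findmnt -o TARGET,PROPAGATION /s3ql   # should report "shared"
```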
But there may be a second problem here: if your FUSE mount requires a persistent process in order to function, this won't work, because your container will exit as soon as the mount.s3ql command completes, taking any processes with it. In this case, you would need to arrange for the container to stay around for as long as you need the mount active:
docker run -d \
--mount type=bind,source=/s3ql,target=/s3ql,bind-propagation=rshared \
--cap-add SYS_ADMIN --device /dev/fuse --name myContainer \
myS3qlIimage sh -c 'mount.s3ql swiftks://url:container /s3ql; sleep inf'
(This assumes that you have a version of the sleep command that supports the inf argument to sleep forever.)
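With that container running, the mount should then be usable outside it; something like the following should work (the alpine image here is purely illustrative):

```shell
# On the host, the S3QL tree should now appear:
ls /s3ql
# Other containers can see it too, via a slave bind mount so that
# mount/unmount events flow from the host into the container:
docker run --rm -v /s3ql:/s3ql:rslave alpine ls /s3ql
```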