I'm trying to create a relatively simple setup for developing and testing npm packages. The problem is that once you mount the source code volume into the container, it replaces node_modules.
I tried a lot of generally logical approaches, mostly aimed at moving node_modules to another location and then referencing it from configuration files. That works, but the solution is ugly. It also requires installing webpack globally, which is not good practice.
However, after some time I found the solution below, which looks elegant and is just what I needed, but it has one problem: I don't completely understand how it works.
Here is my version of how everything operates:
Docker reorders volume mounting based on container paths.
Docker mounts the subdirectory volume first.
Docker mounts the parent directory volume, but due to an unexplained mechanism it does not override the subdirectory volume...
???
PROFIT. The node_modules dir is in place and webpack runs perfectly.
So I really want to understand how it actually does all of this black magic, because without this knowledge I feel like I'm missing something important.
So, guys, how does it work?
Thanks in advance.
services:
  react-generic-form:
    image: react-generic-form:package
    container_name: react-generic-form-package
    build:
      dockerfile: dev.Dockerfile
      context: ./package
    volumes:
      - "./package:/package"
      - "/package/node_modules"
Multiple containers can run with the same volume when they need access to shared data. Docker creates a local volume by default, but we can use a volume driver to share data across multiple machines. Docker also has --volumes-from to link volumes between running containers.
Docker has multiple options for persisting and sharing data for a running container. We may need more than one storage location for a running container, for example to create backups or grant different access, or, for the same container, we may need to add named volumes and bind them to specific paths.
You can manage volumes using Docker CLI commands or the Docker API. Volumes work on both Linux and Windows containers, can be more safely shared among multiple containers, and volume drivers let you store volumes on remote hosts or cloud providers, encrypt the contents of volumes, or add other functionality.
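As a quick illustration of those options, the basic workflow from the CLI looks roughly like this (the volume and container names below are made up for the example):

# Create a named volume and see where Docker stores it on the host.
docker volume create shared-data
docker volume inspect shared-data

# Run two containers that mount the same named volume at /data.
docker run -d --name writer -v shared-data:/data busybox sleep 3600
docker run -d --name reader -v shared-data:/data busybox sleep 3600

# Or reuse every volume of an existing container with --volumes-from.
docker run -d --name reader2 --volumes-from writer busybox sleep 3600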
The Docker daemon, when it creates the container, sorts all of the mount points to avoid shadowing. (On non-Windows, this happens in (*github.com/docker/docker/daemon.Daemon).setupMounts.) So, in your example, /package and /package/node_modules both contain data that's stored outside the container filespace:
Docker mounts /package, as a bind-mount to the named host directory. (First, because it's a shorter path name.)
Docker then mounts /package/node_modules, shadowing the equivalent directory in the previous mount, probably as a bind-mount to a directory with a long hex identifier name somewhere in /var/lib/docker/volumes.
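You can confirm both mounts on the running container from the host; a quick check, using the container_name from the compose file above:

# Show every mount attached to the container: the ./package bind mount
# plus the anonymous volume covering /package/node_modules.
docker inspect --format '{{ json .Mounts }}' react-generic-form-package

# Anonymous volumes show up here with long hex identifiers; their data
# lives under /var/lib/docker/volumes on the host.
docker volume ls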
You can experiment more with this with a docker-compose.yml file like:
version: '3'
services:
  touch:
    image: busybox
    volumes:
      - ./b:/a/b
      - ./a:/a
    command: touch /a/b/c
Notice that whichever order you put the volumes: in, you will get an empty directory ./a/b (which becomes the mount point inside the container), plus an empty file ./b/c (the result of the touch command).
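To run the experiment and see that for yourself (assuming the file above is saved as docker-compose.yml in an otherwise empty directory):

# Run the one-off container; it exits as soon as `touch` completes.
docker-compose up

# The mount point Docker created inside the ./a bind mount: an empty directory.
ls -la a/b

# The file created through the /a/b mount ends up in ./b on the host.
ls -la b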
Also note the statement here that the node_modules directory contains data that should be persisted across container invocations, and has a lifecycle separate from either the container or its base image. Changing the image and re-running docker-compose up will have no effect on this volume's content.
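The practical consequence is that a rebuilt image won't refresh node_modules on its own; if you do want fresh dependencies, you have to discard the anonymous volume explicitly. One way to do that with reasonably recent versions of Compose:

# Recreate containers and replace anonymous volumes with fresh copies
# populated from the rebuilt image.
docker-compose up --build --renew-anon-volumes

# Or tear everything down, anonymous volumes included, and start over.
docker-compose down --volumes
docker-compose up --build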