I have been asked by DevOps at the company I am working for to do something a little different with Docker than I am used to. The goal is to have 2 containers with the following responsibilities:

Container A:
A Node container that will build the frontend React application and place the bundle into a directory called app/dist/. When this is complete, the container will stop running.

Container B:
An alpine nginx container which will serve static files from /usr/share/nginx/html/app.

The files which have been built in Container A will be provided to Container B using a volume which mounts <Container A>/app/dist to <Container B>/usr/share/nginx/html/app.
Please note there is an HAProxy layer (a container called app) between the publicly accessible port and the nginx container.
The tasks above are being orchestrated using a docker compose file which looks like the following:
version: '2'
volumes:
  webapp_build_volume: {}
services:
  webapp_build:
    build:
      context: .
      dockerfile: 'config/nginx/dockerfile-builder'
    volumes:
      - webapp_build_volume:/app/dist
      - webapp_static_volume:/app/src/app/static
  app:
    build:
      context: 'config/haproxy'
      dockerfile: 'dockerfile-app-haproxy'
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - '80:80'
      - '1936:1936'
  web:
    build:
      context: .
      dockerfile: 'config/nginx/dockerfile-web'
    environment:
      - EXCLUDE_PORTS=443
      - VIRTUAL_HOST=*
    depends_on:
      - webapp_build
    volumes:
      - webapp_build_volume:/usr/share/nginx/html/app
This currently works only the first time the docker compose project is built. The files in the volume no longer update after the volume has been created. I have read that named volumes cannot be updated after they have been established, but I cannot confirm this. I have found workarounds which involve running docker-compose rm --force && docker volume rm webapp_build, but I would like to avoid killing the cached containers if possible since the CI service will become too slow.

Please let me know if I can clarify anything (I understand there are a lot of moving parts here). Please note I am also using the Docker 2 beta, though I do not see how that could change anything I have done here.
It's a little hard to follow, but it sounds like you are building an image, outputting files into what you believe is a volume, and trying to use that to populate a named volume used by another running container.
Most likely your confusion is that building an image doesn't mount volumes; volumes are only mounted in running containers. The named volume does have a feature where it will be populated with the contents of an image, but only when you mount a named volume that's empty. It appears you're taking advantage of this feature on the first build+run, but it won't work again on future builds. If you run your build container without a volume, you'll find that your files are there as expected.
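As a quick sanity check (the image name below is an assumption; Compose normally tags built images as <project>_<service>, named after your project directory), you can confirm the compiled files really are inside the builder image by running it without any volumes:

# build only the builder service's image
docker-compose build webapp_build
# run the resulting image with no volumes and list the baked-in build output
docker run --rm myproject_webapp_build ls /app/dist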
You can easily update a named volume. Two options come to mind. One is to keep your current process, but change the volume mount point to something like /target and, as the CMD of your build container, copy the built files into /target. That would look like:
Dockerfile
...
RUN compile-cmd --output-to /local/build/dir
entrypoint.sh:
#!/bin/sh
cp -a /local/build/dir/* /target/
docker-compose.yml:
version: '2'
services:
  webapp_build:
    build:
      context: .
      dockerfile: 'config/nginx/dockerfile-builder'
    volumes:
      - webapp_build_volume:/target
...
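Putting that first option together, the builder Dockerfile could look something like the sketch below. The node:6 base image, the npm commands, and the config/nginx/entrypoint.sh path are assumptions for illustration; only the /app/dist build output and the /target mount come from the setup above.

config/nginx/dockerfile-builder (sketch):
# assumed base image providing the build toolchain
FROM node:6
WORKDIR /app
COPY . /app
# build the bundle into /app/dist, as described in the question (npm scripts assumed)
RUN npm install && npm run build
COPY config/nginx/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# at run time, copy the baked-in bundle into the mounted named volume
ENTRYPOINT ["/entrypoint.sh"]

config/nginx/entrypoint.sh (sketch):
#!/bin/sh
cp -a /app/dist/. /target/

With this in place, each docker-compose up (or docker-compose run --rm webapp_build) re-runs the copy, so webapp_build_volume is refreshed with the latest build without removing the volume or the cached containers.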
The second option is to not do this in the image build at all, but rather make an image with your application's compile prerequisites. Then mount your application code as a volume into this container, with a CMD or ENTRYPOINT that takes the code volume contents, compiles it, and outputs it to the named volume that's also mounted. Then, instead of rebuilding the build image each time, you simply run the compile container with two volumes mounted.
entrypoint.sh:
#!/bin/sh
compile-cmd --input-src=/source --output-to /target
docker-compose.yml:
version: '2'
services:
  webapp_build:
    volumes:
      - ./app/source:/source
      - webapp_build_volume:/target
...
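As a concrete sketch of that second option (the dockerfile-compiler name, the node:6 image, the compile.sh script, and the npm commands are all assumptions; only the /source and /target mounts come from the snippet above):

config/nginx/dockerfile-compiler (hypothetical):
# only the compile prerequisites live in the image; the source arrives via the /source volume
FROM node:6
COPY config/nginx/compile.sh /compile.sh
RUN chmod +x /compile.sh
ENTRYPOINT ["/compile.sh"]

config/nginx/compile.sh (hypothetical):
#!/bin/sh
# compile the mounted source and publish the result into the mounted named volume
cd /source
npm install
npm run build   # assumed to write the bundle to ./dist; adjust to your actual build output path
cp -a ./dist/. /target/

With this approach the CI never rebuilds an image just because the code changed; it only runs docker-compose run --rm webapp_build, which recompiles the mounted source and refreshes webapp_build_volume in place.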