I'm trying to identify a good practice for the build process of a Node.js app using grunt/gulp, to be deployed inside a Docker container.
I'm pretty happy with the following sequence:
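For context, a minimal sketch of what that first sequence might look like (file names, commands, and image tags are assumptions, since the original snippet isn't reproduced here):

```shell
# Approach 1 (hypothetical sketch): build on the host, ship only the output.
npm install
grunt build                  # or "gulp build"; produces ./dist locally

# The Dockerfile then only needs something like: COPY dist /var/www
docker build -t myapp .
```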
But in every example I find, I see a different approach:
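A hypothetical sketch of that second approach, where the dependency installation and the grunt build both happen inside the image (base image and commands assumed):

```dockerfile
# Approach 2 (hypothetical sketch): everything, including the grunt build,
# happens inside the image at docker build time.
FROM node
COPY . /app
WORKDIR /app
RUN npm install -g grunt-cli && npm install && grunt build
CMD ["npm", "start"]
```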
IMO, the first approach generates a lighter and more efficient container, but all of the examples out there are using the second approach. Am I missing something?
I'd like to suggest a third approach that I have used for a statically generated site: the separate build image.
In this approach, your main Dockerfile (the one in the project root) becomes a build and development image, basically doing everything in the second approach. However, you override the CMD at run time to tar up the built dist folder into a dist.tar or similar.
Then, you have another folder (something like image) that contains its own Dockerfile. The role of this second image is only to serve up the dist.tar contents. So we do a docker cp <container_id_from_tar_run>:/dist.tar ./image/ to fetch the artifact, and that Dockerfile just installs our web server and has an ADD dist.tar /var/www.
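A hypothetical sketch of that serving Dockerfile (the nginx base image and target path are assumptions; ADD automatically extracts a local tar archive into the destination directory):

```dockerfile
# image/Dockerfile (hypothetical sketch): only serves the prebuilt artifact.
FROM nginx:alpine
# ADD auto-extracts a local tar archive into the destination directory.
ADD dist.tar /var/www
```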
The abstract is something like:

1. Build the builder Docker image (which gets you a working environment without a webserver). At this point, the application is built. We could run the container in development with grunt serve, or whatever the command is to start our built-in development server.
2. Run the builder container with a CMD of tar -cf /dist.tar /myapp/dist to package up the built application.
3. Copy the dist.tar artifact to your actual deployment Docker folder, which we called image, using docker cp <container_id_from_tar_run>:/dist.tar ./image/.
4. Run docker build ./image.

I like this approach because it is still all Docker. All the commands in this approach are Docker commands, and you can really slim down the actual image you end up deploying.
To see this approach in action, check out https://github.com/gliderlabs/docker-alpine, which uses a builder image (in the builder folder) to build tar.gz files that then get copied to their respective Dockerfile folders.
The only difference I see is that the second approach gives you a reproducible grunt installation.
With the first one, you depend on a local action which might be done differently on different environments.
A container should be based on an image that can be reproduced easily, instead of depending on a host folder which contains "what is needed" (without knowing how that part was produced).
If the build environment overhead which comes with the installation is too much for a grunt image, you can:

- use a container "app.tar" dedicated to the installation (I did that for Apache, which I had to recompile, creating a deb package in a shared volume),
- create a container from a base image, using the volume from that first container:

docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar -xf /shared/path/app.tar
docker commit app.inst app

The end result is an image with the app present on its filesystem.
This is a mix between your approach 1 and 2.
A variation of solution 1 is to have a "parent -> child" setup that makes the build of the project really fast. I would have a Dockerfile like:
FROM node
RUN mkdir app
COPY dist/package.json app/package.json
WORKDIR app
RUN npm install
This will handle the installation of the node dependencies; then have another Dockerfile that will handle the application "installation", like:
FROM image-with-dependencies:v1
ENV NODE_ENV=prod
EXPOSE 9001
COPY dist .
ENTRYPOINT ["npm", "start"]
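A hypothetical build sequence for the two Dockerfiles above (the Dockerfile.deps file name and the image tags are assumptions):

```shell
# Rebuilt only when package.json changes:
docker build -t image-with-dependencies:v1 -f Dockerfile.deps .
# Rebuilt on every code change; fast, because npm install is cached above:
docker build -t myapp .
```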
With this you can continue your development, and the "build" of the Docker image is going to be faster than it would be if you had to reinstall the node dependencies each time. If you add new node dependencies, just rebuild the dependencies image.
I hope this helps someone.
Regards