
Run Grunt / Gulp inside Docker container or outside?

I'm trying to identify a good practice for the build process of a nodejs app using grunt/gulp to be deployed inside a docker container.

I'm pretty happy with the following sequence:

  • build using grunt (or gulp) outside container
  • add ./dist folder to container
  • run npm install (with --production flag) inside container

But in every example I find, I see a different approach:

  • add ./src folder to container
  • run npm install (with dev dependencies) inside container
  • run bower install (if required) inside container
  • run grunt (or gulp) inside container

IMO, the first approach generates a lighter and more efficient container, yet all the examples I find use the second approach. Am I missing something?
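For reference, a minimal sketch of the second approach as a single Dockerfile (the image name, paths, and grunt task are assumptions, and bower/grunt are assumed to be listed in devDependencies):

```dockerfile
# sketch: build everything inside the container (second approach)
FROM node
COPY ./src /app
WORKDIR /app
# install dev dependencies, front-end packages, then build
RUN npm install \
 && ./node_modules/.bin/bower install --allow-root \
 && ./node_modules/.bin/grunt build
CMD ["npm", "start"]
```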

asked Apr 30 '15 by santi

3 Answers

I'd like to suggest a third approach that I have done for a static generated site, the separate build image.

In this approach, your main Dockerfile (the one in the project root) becomes a build and development image, basically doing everything in the second approach. At run time, however, you override the CMD to tar up the built dist folder into a dist.tar or similar.

Then, you have another folder (something like image) that has a Dockerfile. The role of this image is only to serve up the dist.tar contents. So we do a docker cp <container_id_from_tar_run>:/dist.tar ./image/ to get the archive, and the Dockerfile just installs our web server and has an ADD dist.tar /var/www.

The abstract is something like:

  • Build the builder Docker image (which gets you a working environment without a webserver). At this point, the application is built. We could run the container in development with grunt serve or whatever the command is to start our built-in development server.
  • Instead of running the server, we override the default command to tar up our dist folder. Something like tar -cf /dist.tar /myapp/dist.
  • We now have a temporary container with a /dist.tar artifact. Copy it to your actual deployment Docker folder (the one we called image) using docker cp <container_id_from_tar_run>:/dist.tar ./image/.
  • Now, we can build the small Docker image without all our development dependencies with docker build ./image.

I like this approach because it is still all Docker. All the commands in this approach are Docker commands and you can really slim down the actual image you end up deploying.
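Concretely, the flow above might look like the following (image names and the /myapp/dist path are illustrative, not from the original answer):

```shell
# build the builder image from the project root Dockerfile
docker build -t myapp-builder .

# override the default CMD to produce the artifact instead of serving
docker run --name myapp-tar myapp-builder tar -cf /dist.tar /myapp/dist

# copy the artifact into the deployment image folder and build the slim image
docker cp myapp-tar:/dist.tar ./image/
docker build -t myapp ./image

# clean up the temporary container
docker rm myapp-tar
```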

If you want to check out an image with this approach in action, check out https://github.com/gliderlabs/docker-alpine which uses a builder image (in the builder folder) to build tar.gz files that then get copied to their respective Dockerfile folder.

answered Nov 11 '22 by Andy Shinn


The only difference I see is that you can reproduce a full grunt installation in the second approach.

With the first one, you depend on a local action which might be done differently, on different environments.

A container should be based on an image that can be reproduced easily, instead of depending on a host folder which contains "what is needed" (without knowing how that part was done).


If the build environment overhead which comes with the installation is too much for a grunt image, you can:

  • create a dedicated installation container ("app.tar") for the build (I did that for Apache, which I had to recompile, creating a deb package in a shared volume).
    In your case, you can create an archive ('tar') of the installed app.
  • create a container from a base image, using the volume from that first container:

    docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar -xf /shared/path/app.tar
    docker commit app.inst app
    

The end result is an image with the app present on its filesystem.

This is a mix between your approach 1 and 2.
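The tar round-trip itself is plain shell; a minimal sketch with throwaway paths standing in for the real installation:

```shell
# create a sample installed-app tree (stand-in for the real installation)
mkdir -p /tmp/demo/app/dist
echo "built asset" > /tmp/demo/app/dist/app.js

# archive it, as the dedicated installation container would
tar -cf /tmp/demo/app.tar -C /tmp/demo app

# list the archive contents to verify the app is inside
tar -tf /tmp/demo/app.tar
```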

answered Nov 11 '22 by VonC


A variation of solution 1 is to use a "parent -> child" image pair that makes building the project really fast. I would have a Dockerfile like:

FROM node
RUN mkdir app
COPY dist/package.json app/package.json
WORKDIR app
RUN npm install

This handles the installation of the node dependencies. Then have another Dockerfile that handles the application "installation", like:

FROM image-with-dependencies:v1
ENV NODE_ENV=prod
EXPOSE 9001
COPY dist .
ENTRYPOINT ["npm", "start"]

With this you can continue your development, and the "build" of the Docker image is going to be faster than it would be if you had to "re-install" the node dependencies each time. If you add new node dependencies, just rebuild the dependencies image.
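Driving the pair might look like this (the Dockerfile.deps filename and the image tags are assumptions, not from the original answer):

```shell
# rebuild only when package.json changes
docker build -t image-with-dependencies:v1 -f Dockerfile.deps .

# day-to-day build: just copies dist and reuses the dependency image
docker build -t myapp .
```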

I hope this helps someone.

Regards

answered Nov 11 '22 by cesaregb