I'm planning to use Docker to deploy a Node.js app. The app has several dependencies that require node-gyp. Node-gyp builds these modules (e.g. canvas, lwip, qrcode) against compiled libraries on the delivery platform, and in my experience these builds are highly dependent on the OS version and installed libraries, and they often break a simple npm install.
So is building my Dockerfile FROM node:version the correct approach? This seems to be the approach shown in every Docker/Node tutorial I've found so far. But if I build from a node image, what will happen when I deploy the container? How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version. But I think this would mean installing Node.js into the Ubuntu image, and the whole thing would be much larger.
Are there other ways of handling this?
Looking back (2 years later), managing node dependencies in a container is still a challenge. What I do now is:
Build the Docker image FROM node:10.16.0-alpine (or another Node version). These are official Node images on hub.docker.com. Docker recommends Alpine, and Node.js builds on top of it, including node-gyp, so it's a good starting point;
Include a RUN apk add --no-cache with all the libraries needed to build the dependent module, e.g. canvas (see example below);
Include a RUN npm install canvas in the Dockerfile; this builds the node module (e.g. canvas) into the Docker image, so it gets loaded into any container run from that image.
But this can get ugly. Alpine uses different libraries from heavier-weight OSes: notably, Alpine uses musl in place of glibc. The dependent module may need to link against glibc, in which case you have to add it to the image. Sasha Gerrand offers one way to do it with alpine-pkg-glibc.
Example installing node-canvas v2.5, which links to glibc:
# geo_core layer
# build on a node image, in turn built on Alpine Linux, pulled from hub.docker.com
FROM node:10.16.0-alpine

# add libraries needed to build canvas
RUN apk add --no-cache \
    build-base \
    g++ \
    libpng \
    libpng-dev \
    jpeg-dev \
    pango-dev \
    cairo-dev \
    giflib-dev \
    python

# add glibc and install canvas
RUN apk add --no-cache ca-certificates wget && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk && \
    apk add glibc-2.29-r0.apk && \
    npm install canvas@2.5
How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The target host is running Docker as well. As long as the dependencies are in your image, your server has them too. That's the entire point of Docker, if you ask me. If it runs locally, it runs on the server as well.
I'd go with node-alpine (FROM node:8-alpine) for even smaller images. I struggled with node-gyp before I wrapped my head around it, but now I don't even see how I ever thought it was a problem. As long as you add the build tools with RUN apk add python make gcc g++, you are good to go (this adds some 100-200 MB to the image size, however).
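As a sketch, a minimal Dockerfile along those lines might look like this (the app layout and the `node index.js` entry point are just example assumptions):

```dockerfile
FROM node:8-alpine

# toolchain needed by node-gyp to compile native modules
RUN apk add --no-cache python make gcc g++

WORKDIR /app
# copy manifests first so the npm install layer is cached
# until the dependencies actually change
COPY package.json package-lock.json ./
RUN npm install

COPY . .
CMD ["node", "index.js"]
```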
Also, if it ever gets time-consuming (say you find yourself rebuilding your image with --no-cache every now and then), it can be a good idea to split it up into a base image of your own and another image FROM my-base-image:latest which contains the things that you change more often.
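A sketch of that split, with placeholder image and file names (two Dockerfiles shown in one block for brevity): the slow, rarely-changing parts go in the base image, and the frequently rebuilt image starts from it.

```dockerfile
# Dockerfile.base -- build once:
#   docker build -f Dockerfile.base -t my-base-image:latest .
FROM node:8-alpine
RUN apk add --no-cache python make gcc g++

# Dockerfile -- rebuilt often; only the layers below are
# invalidated when the app changes
FROM my-base-image:latest
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]
```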
There is some learning curve for sure, but I didn't find it that steep. At least not if you have touched docker before.
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version.
I had only used CentOS before jumping on Docker, and I run CentOS on my servers. So I thought it would be a good idea to run CentOS images as well, but I found that to be just silly. There is absolutely zero gain unless you need something very OS-specific. Now I've only used Alpine for maybe half a year, and so far the only Alpine-specific command I've needed to learn is apk add/del.
And you probably know this already, but don't spend too much time optimizing Docker image size in the beginning. You can reduce layer size a lot by combining commands on one line (adding packages, running a command, removing packages), but that defeats the Docker image cache if you make any small change in a big layer. Better to leave that out until it matters.
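For reference, that combine-into-one-layer pattern looks something like this (package names are just examples). apk's --virtual flag groups the build tools under a label so one apk del removes them all, and because add, build, and del happen in a single RUN, the toolchain never persists in any committed layer:

```dockerfile
FROM node:8-alpine
WORKDIR /app
COPY package.json ./
# add build tools, compile native modules, then remove the
# tools -- all in one RUN, so the layer stays small
RUN apk add --no-cache --virtual .build-deps python make gcc g++ && \
    npm install && \
    apk del .build-deps
COPY . .
```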