 

ELF Header or installation issue with bcrypt in Docker container

Kind of a long shot, but has anyone had any problems using bcrypt in a Linux container (specifically Docker) and know of an automated workaround? I have the same issue as these two:

Invalid ELF header with node bcrypt on AWSBox

bcrypt invalid elf header when running node app

My Dockerfile

# Pull base image
FROM node:0.12

# Expose port 8080
EXPOSE 8080

# Add current directory into path /data in image
ADD . /data

# Set working directory to /data
WORKDIR /data

# Install dependencies from package.json
RUN npm install --production

# Run index.js
CMD ["npm", "start"]

I get the previously mentioned invalid ELF header error if I have bcrypt already installed in my node_modules, but if I remove it (either just itself or all my packages), it isn't installed for some reason when I build the container. I have to manually enter the container after the build and install it inside.

Is there an automated workaround?

Or maybe, just, what would be a good alternative to bcrypt with a Node stack?

Jimmy Gong asked Jul 24 '15

2 Answers

Liam's comment is on the money; just expanding on it for future travellers on the internets.

The issue is that you've copied your node_modules folder into your container. The reason this is a problem is that bcrypt is a native module. It's not just JavaScript, but also a bunch of C code that gets compiled at the time of installation.

The binaries that come out of that compilation get stored in the node_modules folder and they're customised to the place they were built. Transplanting them out of their OSX home into a strange Linux land causes them to misbehave and complain about ELF headers and fairy feet.

The solution is to echo node_modules >> .dockerignore and run npm install as part of your Dockerfile. This means that the native modules will be compiled inside the container rather than outside it on your laptop.

With this in place, there is no need to run npm install before your start CMD. Just having it in the build phase of the Dockerfile is fine.
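A minimal sketch of that fix from the shell (the image name myapp is an assumption):

```shell
# Keep the host-built node_modules out of the Docker build context
echo node_modules >> .dockerignore

# Rebuild the image; npm install now runs inside the container,
# so bcrypt's native code is compiled against Linux, not OSX
docker build -t myapp .
```

After this, the node_modules directory on your laptop is simply never sent to the Docker daemon, so stale Mac-built binaries can't end up in the image.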

protip: the official node images set NODE_ENV=production by default, which npm treats the same as the --production flag. Most of the time this is a good thing. It is not a good thing when your Dockerfile also contains some build steps that rely on dev dependencies (webpack, etc). In that case you want NODE_ENV=null npm install.

pro protip: you can take better advantage of Docker's caching by copying in your package.json separately to the rest of your code. Make your Dockerfile look like this:

# Pull base image
FROM node:0.12

# Expose port 8080
EXPOSE 8080

# Set working directory to /data
WORKDIR /data

# Copy package.json into /data
COPY package.json /data

# Install dependencies from package.json
RUN npm install

# Add current directory into path /data in image
ADD . /data

# Run index.js
CMD npm start

And that way Docker will only re-run npm install when you change your package.json, not every time you change a line of code.

davidbanham answered Nov 15 '22


Okay, so I have a working automated workaround:

Call npm install --production in the CMD instruction. I'm going to wave my hands at figuring out why I have to install bcrypt at the time of executing the container, but it works.

Updated Dockerfile

# Pull base image
FROM node:0.12

# Expose port 8080
EXPOSE 8080

# Add current directory into path /data in image
ADD . /data

# Set working directory to /data
WORKDIR /data

# Install dependencies from package.json
RUN npm install --production

# Run index.js
CMD npm install --production; npm start

Jimmy Gong answered Nov 15 '22