
How can I speed up node.js react startup in a Docker container

I am running the official Node.js image in a Docker container, and I noticed that the npm start command takes much longer to start than it does outside of Docker.

Are there settings that I can change to make it run faster? Perhaps allocating more memory to the container?

For reference, I'll paste the relevant files below.

Dockerfile:

FROM node:8.1

WORKDIR /var/www/app

# Globally install the Yarn package manager
RUN apt-get update && apt-get install -y curl apt-transport-https && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -y yarn

RUN npm install -g create-react-app

The command I use to start my container:

docker run --rm -ti \
--link api-container:api \
--name my-container -p 3000:3000 \
-v $(pwd):/var/www/app nxmohamad/my-container \
bash

and the start script is just NODE_PATH=. react-scripts start

asked Jan 04 '23 by nxmohamad
2 Answers

Bind mounting a directory from the host, through the VM, into the container (via osxfs on macOS, or Hyper-V on Windows) will be slower than normal file access. The Linux file cache is constrained in order to achieve "consistency" between the host and the container, so applications that depend on the file cache for speed can slow down. PHP web apps built on frameworks are hit particularly hard, as they load all their files on each request.

React is likely in a slightly better position, as the file reads only happen once at startup, but those reads will still be slow on every startup.

Anything that actively writes to a mounted directory is simply going to be slower.

Workarounds

Caching

Some caching options were added to bind mounts in Docker 17.06 so users can control mount behaviour beyond the default 'consistent' level, where every read is passed out from the container to macOS.

The node_modules directory is likely the main cause of the slowness. It's also the safest place to enable caching, as it doesn't change often.

This setup can get verbose depending on your directory structure as you have to mount each item in your app directory independently:

docker run --rm -ti \
  --link api-container:api \
  --name my-container -p 3000:3000 \
  -v $(pwd)/index.js:/var/www/app/index.js \
  -v $(pwd)/package.json:/var/www/app/package.json \
  -v $(pwd)/src:/var/www/app/src \
  -v $(pwd)/node_modules:/var/www/app/node_modules:cached \
  nxmohamad/my-container \
  bash
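An alternative sketch, not from the answer above: shadow node_modules with a named Docker volume so it lives entirely inside the Linux VM and never crosses the slow host filesystem boundary. The volume name app_node_modules is made up for illustration.

```shell
# Create a named volume once; Docker stores it inside the Linux VM,
# so reads from it never cross the host<->VM filesystem boundary.
docker volume create app_node_modules

# Mount the whole project with :cached, then shadow node_modules
# with the named volume (the more specific mount wins).
docker run --rm -ti \
  --link api-container:api \
  --name my-container -p 3000:3000 \
  -v $(pwd):/var/www/app:cached \
  -v app_node_modules:/var/www/app/node_modules \
  nxmohamad/my-container \
  bash

# On the first run, install dependencies inside the container
# (e.g. run `yarn install`) so they land in the volume.
```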

Syncing

The other option is to use a tool like rsync or unison to keep a local volume in sync, rather than relying on bind mounts from macOS or Windows.

A tool called docker-sync was written specifically for this. Getting a working configuration can be a bit difficult, and it can get itself into a tangle sometimes (it has caused a couple of kernel oopses when I've left it running over a suspend), but it works in the end.
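For reference, a minimal docker-sync setup might look like the sketch below. The sync name app-sync and the excludes are assumptions for illustration, not taken from the answer.

```shell
# docker-sync is distributed as a Ruby gem
gem install docker-sync

# Minimal docker-sync.yml in the project root
# (sync name "app-sync" is made up for this example)
cat > docker-sync.yml <<'EOF'
version: "2"
syncs:
  app-sync:
    src: './'
    sync_excludes: ['node_modules']
EOF

# Start the sync daemon; then mount the app-sync volume in your
# container instead of a host bind mount.
docker-sync start
```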

answered Jan 05 '23 by Matt

Windows

Matt's answer seems to help Mac users more so than Windows users. If you're on Windows, you should run your Docker commands inside your Linux distro. It was a night-and-day difference for me, with no messing with caching or anything. If you already have Docker Desktop installed, you just have to make sure you also have a Linux distro installed; it's easy to set up if you don't.

Basically, any read/write between Windows and Linux takes a long time. If you run your Docker container inside the Windows Subsystem for Linux, file reads and writes are near-instantaneous because the files are going from Linux to Linux. You will have to move your files from their Windows directory into a directory in your Linux distro, but that shouldn't be a problem assuming you're using Git.
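Concretely, the workflow looks something like this (the distro name and repository URL below are placeholders, not from the answer):

```shell
# From PowerShell, open a shell in your WSL 2 distro (name is a placeholder)
wsl -d Ubuntu

# Inside WSL: keep the project on the Linux filesystem (e.g. under ~),
# NOT under /mnt/c, or you reintroduce the slow Windows<->Linux path.
git clone <your-repo-url> ~/app   # placeholder URL
cd ~/app

# Docker Desktop with the WSL 2 backend picks up the daemon automatically
docker run --rm -ti -p 3000:3000 \
  -v $(pwd):/var/www/app nxmohamad/my-container \
  bash
```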

Resources:

Article that kind of explains it

Docker documentation on setting up WSL 2 and a Linux Distro

VSCode working with WSL 2

answered Jan 05 '23 by Michael Cox