I’m trying to improve the build time of my automation. Right now it takes 14 mins just to build the front-end.
This is what I have so far:
web.dockerfile
### STAGE 1: Build ###
FROM node:9.3.0-alpine as builder
COPY package.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i
RUN mkdir /web
RUN cp -R ./node_modules ./web
WORKDIR /web
COPY . .
RUN $(npm bin)/ng build --prod --build-optimizer
### STAGE 2: Setup ###
FROM nginx:1.13.8-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY site.conf /etc/nginx/conf.d/default.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /web/dist /usr/share/nginx/html/
RUN touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/run/nginx.pid && \
chown -R nginx:nginx /var/cache/nginx && \
chown -R nginx:nginx /usr/share/nginx/html
USER nginx
RUN $(npm bin)/ng build --prod --build-optimizer
This line above takes almost the entire build time (around 99% of it).
.angular-cli.json
{
"$schema": "./node_modules/@angular/cli/lib/config/schema.json",
"project": {
"name": "web"
},
"apps": [{
"root": "src",
"outDir": "dist",
"assets": [
"assets",
"favicon.ico"
],
"index": "index.html",
"main": "main.ts",
"polyfills": "polyfills.ts",
"test": "test.ts",
"tsconfig": "tsconfig.app.json",
"testTsconfig": "tsconfig.spec.json",
"prefix": "app",
"styles": [
"styles.css",
"../node_modules/bootstrap/dist/css/bootstrap.min.css",
"../node_modules/ngx-toastr/toastr.css",
"../src/assets/css/style.css",
"../src/assets/css/colors/blue.css"
],
"scripts": [
"../node_modules/jquery/dist/jquery.min.js",
"../node_modules/popper.js/dist/umd/popper.min.js",
"../node_modules/bootstrap/dist/js/bootstrap.min.js",
"../node_modules/jquery-slimscroll/jquery.slimscroll.min.js",
"../node_modules/pace-js/pace.min.js"
],
"environmentSource": "environments/environment.ts",
"environments": {
"dev": "environments/environment.ts",
"prod": "environments/environment.prod.ts"
}
}],
"e2e": {
"protractor": {
"config": "./protractor.conf.js"
}
},
"lint": [{
"project": "src/tsconfig.app.json",
"exclude": "**/node_modules/**"
},
{
"project": "src/tsconfig.spec.json",
"exclude": "**/node_modules/**"
},
{
"project": "e2e/tsconfig.e2e.json",
"exclude": "**/node_modules/**"
}
],
"test": {
"karma": {
"config": "./karma.conf.js"
}
},
"defaults": {
"styleExt": "css",
"component": {}
}
}
Docker Cloud connects to my AWS.
AWS: EC2 micro instance.
This Dockerfile works and the build succeeds.
But it takes about 14 minutes to build. Is it possible to improve this? Is it because my instance has too little processing power?
[TL;DR]
- Your npm dependencies are likely being re-downloaded and your images rebuilt on every run; mount volumes for node_modules and the npm cache instead of copying files in.
- package.json is copied twice (into / and into /web via COPY . .), so dependencies are installed twice.
- COPY . . brings relative path issues and possible information leaks.
- Build the image once and push it to a registry; restructure the pipeline into parallel build/test steps, archive the build output (dist), and tag based on failure rates.
[LONG VERSION]
There is a good chance that your npm dependencies are being re-downloaded and/or your docker images are being rebuilt for every build you run.
Rather than copying files into a Docker image, it would be better to mount volumes for modules and cache so that additional dependencies included later don't need to be downloaded again. Typical directories you should consider creating volumes for are node_modules (one for global and one for local) and .npm (the cache).
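For example, a rough sketch of what that could look like for the install step (the volume names and mount paths here are illustrative, not taken from your setup):

# Named volumes persist the npm cache and installed modules between runs,
# so only new or changed dependencies get downloaded.
docker run --rm \
  -v npm-cache:/root/.npm \
  -v web-node-modules:/web/node_modules \
  -v "$PWD":/web -w /web \
  node:9.3.0-alpine npm i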
Your package.json is being copied into root /, and the same package.json is being copied into /web with COPY . . . The initial run of npm i installs into / ([EDIT] - it would seem that npm i is installing into / and the same work is repeated for /web. You're downloading dependencies twice, but are the modules in / going to be used for anything? Regardless, you appear to be using the same package.json in both npm i and ng build, so the same thing is being done twice; ng build doesn't redownload packages), but that node_modules isn't available in /web, so another one is created and all packages are re-downloaded.
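For reference, a minimal sketch of a build stage that avoids the duplicated copy and install, using the same images and commands as yours but doing everything in /web:

### STAGE 1: Build (sketch) ###
FROM node:9.3.0-alpine as builder
WORKDIR /web
# Copy only package.json first so the install layer stays cached
# until the dependencies actually change.
COPY package.json ./
RUN npm set progress=false && npm config set depth 0 && npm i
# Now copy the rest of the sources; node_modules is already in /web.
COPY . .
RUN $(npm bin)/ng build --prod --build-optimizer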
You create a web directory in root /, but other commands refer to relative paths like ./web. Are you certain that things are running in the right places? There is no guarantee that programs will look in the directories you want them to if you use relative paths. While it may appear to work for this image, the same practice will not be consistent across other images that may have different initial working directories.
[may or may not be relevant information]
Although I'm not using Bitbucket for automating builds, I faced a similar issue when running Jenkins pipelines. Jenkins placed the project in a different directory, so every time it ran, all the dependencies were downloaded again. I initially thought the project would be in /home/agent/project, but it was actually placed elsewhere. I found the directory the project was copied to by running the pwd and npm cache verify commands in a build step, then mounted the volumes to the correct places. You can view the output in the logs generated on builds by expanding the relevant section within the pipelines page.
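If you want to do the same check, something like this in a build step is enough (purely diagnostic):

# Print where the build is actually running and where npm's cache lives,
# so you know which paths to mount volumes to.
pwd
npm cache verify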
If the image is being rebuilt on every run, build your image separately, then push it to a registry and configure the pipeline to use your image instead. Try to use already-available base images whenever possible, unless you need dependencies that are unavailable in the base image (things like Alpine's apk packages, not npm; npm dependencies can be stored in volumes). If you're going to use a public registry, do not store any files that may contain sensitive data. Configure your pipeline so that such things are mounted with volumes and/or provided via secrets.
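For example (the registry and image names below are placeholders):

# Build the image once, push it to a registry, and have the pipeline
# pull it instead of rebuilding it on every run.
docker build -t myregistry/web-builder:latest -f web.dockerfile .
docker push myregistry/web-builder:latest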
A basic restructure of the test and build steps:

              Image on Docker Hub
               |                    |
               v                    v
Commit --> build (no test) --> e2e tests (no build) --+--> archive build --> (deploy/merge/etc)
               |                                      ^
               +-------> unit tests (no build) -------+
You don't need to follow it entirely, but it should give you an idea of how you could use parallel steps to separate things and improve completion times.