 

Concourse CI - Build Artifacts inside source, pass all to next task

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.

The plan is to do something like this:

                                        ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                        `-> build website -> dockerize

The problem is that after npm install a new container is created, so the node_modules directory is lost. I want to pass node_modules on to the later tasks, but because it lives "inside" the source code, Concourse rejects the configuration and gives me

invalid task configuration:
  you may not have more than one input or output when one of them has a path of '.'

Here's my job setup:

jobs:
  - name: test
    serial: true
    disable_manual_trigger: false
    plan:
      - get: source-code
        trigger: true

      - task: npm-install
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
          outputs:
            - name: node_modules
          run:
            path: npm
            args: [ install ]

      - task: npm-test
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
            - name: node_modules
          run:
            path: npm
            args: [ test ]

Update 2016-06-14

Inputs and outputs are just directories: you put whatever you want to output into an output directory, and you can then pass it to another task in the same job. Inputs and outputs cannot overlap, so to do this with npm you have to copy either node_modules or the entire source folder from the input directory into an output directory, then use that output in the next task.
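For illustration, here is a rough sketch of what that copy approach could look like within the same job; the output directory name installed-source and the shell invocation are mine, not part of the original pipeline:

      - task: npm-install
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
          outputs:
            - name: installed-source
          run:
            path: sh
            args:
              - -exc
              - |
                # copy the whole source tree into the output, then install there
                cp -a source-code/. installed-source/
                cd installed-source
                npm install

      - task: npm-test
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: installed-source
          run:
            path: sh
            args:
              - -exc
              - |
                # node_modules is already present in the copied tree
                cd installed-source
                npm test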

This doesn't work between jobs, though. The best suggestion I've seen so far is to push everything up to a temporary git repository or bucket. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.
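For completeness, a minimal sketch of the bucket variant using Concourse's standard s3 resource to carry a node_modules tarball between jobs; the bucket name, credential variables and directory names below are placeholders, not something from my pipeline:

resources:
  - name: node-modules-tarball
    type: s3
    source:
      bucket: my-ci-cache                            # placeholder bucket name
      versioned_file: node_modules.tar.gz
      access_key_id: {{aws_access_key_id}}           # placeholder credentials
      secret_access_key: {{aws_secret_access_key}}

# In the install job, after npm install has run into installed-source:
      - task: pack-node-modules
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: installed-source
          outputs:
            - name: tarball
          run:
            path: sh
            args: [ -exc, "tar -czf tarball/node_modules.tar.gz -C installed-source node_modules" ]
      - put: node-modules-tarball
        params: {file: tarball/node_modules.tar.gz}

Downstream jobs would then get node-modules-tarball and untar it next to the source, which is exactly the kind of network IO I was hoping to avoid.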

asked Feb 07 '23 by DanielM

1 Answer

There is a resource designed specifically for this use case of caching npm installs between jobs. I have been using it for a couple of weeks now:

https://github.com/ymedlop/npm-cache-resource

It basically allows you to cache the first npm install and inject it as a folder into the next job of your pipeline. If you want to cache more than node_modules, you could quite easily set up your own caching resources by reading the source of that one.
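Roughly, wiring it in means declaring a custom resource type and then pulling the cached folder in with a get step; the exact source fields below (pointing the cache at the same repo's package.json) are my assumptions, so check them against the resource's README:

resource_types:
  - name: npm-cache
    type: docker-image
    source: {repository: ymedlop/npm-cache-resource, tag: latest}

resources:
  - name: npm-cache
    type: npm-cache
    source:
      # Assumption: the cache is keyed off the same git repository and its
      # package.json; verify the field names against the resource's README.
      uri: https://github.com/your-org/your-app.git
      branch: master
      paths:
        - package.json

jobs:
  - name: test
    plan:
      - get: source-code
        trigger: true
      - get: npm-cache
      # Tasks can then take npm-cache as an input and copy or symlink it to
      # node_modules next to the source before running npm test.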

I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.

Be aware that some npm packages have native bindings that need to be built against standard libraries matching the container's Linux version. If you move between different types of containers a lot, you may run into issues with libmusl etc.; in that case I recommend either standardizing on the same container type throughout the pipeline or rebuilding the node_modules in question.
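If you do hit that, rebuilding the native addons inside the target container before running anything is usually enough; a rough sketch, with illustrative directory names:

      - task: rebuild-native-modules
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6-alpine" }   # musl-based image, for illustration
          inputs:
            - name: installed-source
          outputs:
            - name: rebuilt-source
          run:
            path: sh
            args:
              - -exc
              - |
                cp -R installed-source/. rebuilt-source/
                cd rebuilt-source
                # recompiles native addons against this container's libc
                # (a real task would also need build tools such as python/make/g++)
                npm rebuild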

There is a similar resource for Gradle (on which the npm one is based): https://github.com/projectfalcon/gradle-cache-resource

answered Feb 16 '23 by David Karlsson