
Gitlab CI pipeline - continue to next stage only on a certain condition

I am trying to build a Gitlab pipeline that is made up of 4 jobs. The stages I have are:

stages:
- compare
- build
- test
- deploy

The compare stage takes a dump from an API on another server and compares it to the dump from the last successful pipeline run (made available as an artifact). If there is any difference, I would like the pipeline to move on to the next stage; if there is no difference, I would like it to exit gracefully.

I have it working, but rather than exiting gracefully when there are no differences, the job fails and the pipeline is marked as failed. Here is how it looks:

[Screenshot: the compare job fails and the pipeline is marked as failed]

Here is the important code from my .gitlab-ci.yaml (with some identifying information removed):

Get_inventory_dump:
  stage: compare  
  only:
    - schedules
  script: 
    - 'curl -k --output "previous-inventory.json" --header "PRIVATE-TOKEN: $user_token" "https://url/to/get/artifact/from/last/successful/run"'
    - python3 auto_config_scripts/dump_device_inventory_api_to_json.py -p $pass -o /inventory.json -u https://url/for/inventory/dump -y
    - /usr/bin/cmp previous-inventory.json inventory.json && echo "No Change in inventory since last successful run" && exit 1 || echo "Inventory has changed since last run, continue" && exit 0
  artifacts:
    when: on_success
    expire_in: 4 weeks
    paths:
     - inventory.json

Generate_icinga_config:
  stage: build
  only:
    - schedules
  when: on_success
  script: 

Everything is behaving as I would expect, but I feel like there is a better way to do this.

Is there a way, if the comparison shows no difference, to simply skip the later stages of the pipeline but still have the pipeline complete as 'passed' rather than 'failed'?

asked Feb 15 '19 by Martin W



2 Answers

There are two solutions I can think of. Unfortunately, they either come with slightly confusing UI behavior or require you to adapt all jobs.

Job attributes like only or changes are only concerned with the state or files of the git repository (see https://docs.gitlab.com/ee/ci/yaml/) and are therefore of no use here, as the file is only created during CI and is not part of the repository.

Solution 1: You can add allow_failure: true to the first job. This marks the pipeline as successful despite the job failing, and subsequent jobs will not be executed because the first job did not succeed. The drawback is that when you investigate the pipeline there will be an exclamation mark instead of a green check for this job.
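
For illustration, a minimal sketch of what that could look like, based on the job from the question (the allow_failure line is the only addition; everything else is copied from the question):

Get_inventory_dump:
  stage: compare
  only:
    - schedules
  # allow the job to exit non-zero without marking the whole pipeline as failed
  allow_failure: true
  script:
    - 'curl -k --output "previous-inventory.json" --header "PRIVATE-TOKEN: $user_token" "https://url/to/get/artifact/from/last/successful/run"'
    - python3 auto_config_scripts/dump_device_inventory_api_to_json.py -p $pass -o /inventory.json -u https://url/for/inventory/dump -y
    - /usr/bin/cmp previous-inventory.json inventory.json && echo "No Change in inventory since last successful run" && exit 1 || echo "Inventory has changed since last run, continue" && exit 0
  artifacts:
    when: on_success
    expire_in: 4 weeks
    paths:
      - inventory.json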

Solution 2: Instead of failing the first job when there are no changes, remove the inventory.json file, and have all subsequent jobs terminate immediately with exit code 0 when the file doesn't exist. Note that this only works because inventory.json is marked as an artifact.
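
A rough sketch of that approach, reusing the job names and commands from the question; the cmp -s / rm handling and the placeholder build step are illustrative and not the poster's actual code:

Get_inventory_dump:
  stage: compare
  only:
    - schedules
  script:
    - 'curl -k --output "previous-inventory.json" --header "PRIVATE-TOKEN: $user_token" "https://url/to/get/artifact/from/last/successful/run"'
    - python3 auto_config_scripts/dump_device_inventory_api_to_json.py -p $pass -o /inventory.json -u https://url/for/inventory/dump -y
    # cmp -s returns 0 when the files are identical; in that case drop the new dump so later jobs can detect "no change"
    - if cmp -s previous-inventory.json inventory.json; then echo "No change in inventory"; rm -f inventory.json; else echo "Inventory has changed"; fi
  artifacts:
    when: on_success
    expire_in: 4 weeks
    paths:
      - inventory.json

Generate_icinga_config:
  stage: build
  only:
    - schedules
  script:
    # the artifact file is absent when nothing changed, so exit successfully right away
    - if [ ! -f inventory.json ]; then echo "No change in inventory, nothing to do"; exit 0; fi
    - echo "generate the Icinga config here"   # placeholder for the real build step

On "no change" runs GitLab should just warn that the inventory.json artifact path matched no files; the jobs themselves still pass.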

answered Sep 22 '22 by fzgregor


Based on fzgregor's suggestion, this is how I solved my problem: if there was a difference and I wanted my second stage to actually do some work, I created a file called "continue" and made it available as an artifact.

The second stage looks for that file and uses an if statement to decide whether it should do something or just exit nicely:

Get_inventory_dump:
  stage: compare  
  only:
    - schedules
  script: 
    - 'curl -k --output "previous-inventory.json" --header "PRIVATE-TOKEN: $user_token" "https://url/to/get/artifact/from/last/successful/run"'
    - python3 auto_config_scripts/dump_device_inventory_api_to_json.py -p $pass -o /inventory.json -u https://url/for/inventory/dump -y
    - /usr/bin/cmp previous-inventory.json inventory.json && echo "No Change in inventory since last successful run" || echo "Inventory has changed since last run, continue" && touch continue
  artifacts:
    when: on_success
    expire_in: 4 weeks
    paths:
     - inventory.json
     - continue

Generate_icinga_config:
  stage: build
  only:
    - schedules
  when: on_success
  script: 
    - if [[ -f continue ]]; then
        do some stuff;
      else
        echo "No Change in inventory, nothing to do";
      fi

This allowed me to keep my inventory artifact while at the same time letting the next stage know whether it needed to do some work or could just do nothing and exit with code 0.

answered Sep 19 '22 by Martin W