
Gitlab-CI: Specify that Job C should run after Job B if Job A fails

Imagine you have the following Pipeline:

Job A (deploy) -> Job B (test) -> Job C (remove test deployment)

The pipeline should deploy a test image and, after a successful deployment, test it. After the test I want to run a cleanup script regardless of the test outcome, but only if the test image (Job A) was actually deployed.

To summarize this: I want Gitlab to execute Job C only if Job A succeeds, but after Job B.

Things that won't work:

  • when: on_failure (either Job A or Job B could fail, but only a failure of Job A matters)
  • when: always (Job A may have failed, in which case Job C's cleanup would fail as well)
  • when: on_success (requires all previous jobs to succeed)

I know that GitLab has a feature called DAG Pipelines, which allows you to specify multiple dependencies on other jobs with the needs keyword, but sadly the when keyword always applies to all prior jobs. So you are not able to say something like:

when:
    on-success: job-a
    always: job-b

Am I missing something, or is there no way to achieve such behaviour?

asked Oct 05 '20 by Sebi2020



2 Answers

The needs DAG field can be used to conditionally execute the cleanup (Job C) when Job B fails or succeeds, but NOT when Job B is skipped because Job A failed.

Create 2 cleanup jobs that match the following boolean conditions:

  • (Job A succeeds and Job B succeeds): If all previous tasks succeed (Job A and Job B), we can run the cleanup with when: on_success. However, this will not trigger if Job A succeeds and Job B fails.
  • (Job A succeeds and Job B fails): To circumvent the previous scenario with an untriggered cleanup (Job C), we make use of the fact that if Job B fails, this implies that Job A succeeded in the pipeline. By creating a duplicate cleanup task and specifying a needs tag on Job B and when: on_failure, the cleanup task will only run if Job A succeeds and Job B fails.

To reiterate: a cleanup job will run if (Job A succeeds and Job B succeeds) or (Job A succeeds and Job B fails), which by boolean expression reduction is equivalent to (Job A succeeds).
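The boolean reduction above can be checked mechanically. This illustrative sketch (the function name and structure are my own, not from the answer) enumerates all four outcomes of Job A and Job B and confirms that at least one of the two cleanup jobs runs exactly when Job A succeeded:

```python
from itertools import product

def cleanup_runs(a_ok: bool, b_ok: bool) -> bool:
    """True if either cleanup job would run for this pipeline outcome."""
    # cleanup_deployment_success (when: on_success): all prior jobs passed
    success_variant = a_ok and b_ok
    # cleanup_deployment_failure (needs: test_job, when: on_failure):
    # runs only if Job B ran and failed, which implies Job A passed
    failure_variant = a_ok and not b_ok
    return success_variant or failure_variant

# The combined condition reduces to "Job A succeeded":
for a_ok, b_ok in product([True, False], repeat=2):
    assert cleanup_runs(a_ok, b_ok) == a_ok
```

The loop passes for every combination, matching the reduction (A and B) or (A and not B) = A.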

An obvious caveat here is that there are now 2 cleanup jobs that are displayed in the pipeline; however, they are mutually exclusive and only one could ever be executed.

Here is a sample configuration:

stages:
  - deploy
  - test
  - cleanup

deploy_job:
  stage: deploy
  script:
    - echo Deployed
    - "true"
  when: always

test_job:
  stage: test
  script:
    - echo Executing tests
    - "true"
  when: on_success

# a YAML anchor reduces repetition
.cleanup_job: &cleanup_job
  stage: cleanup
  script:
    - echo Cleaned up deployment

cleanup_deployment_success:
  # runs only if all prior jobs (deploy_job and test_job) succeeded
  when: on_success
  <<: *cleanup_job

cleanup_deployment_failure:
  # runs only if test_job ran and failed, which implies deploy_job succeeded
  needs: ["test_job"]
  when: on_failure
  <<: *cleanup_job

With various intentional fail conditions, the following pipeline states are produced:

  • failed pipeline: Job A succeeds and Job B fails
  • failed pipeline: Job A fails and Job B is skipped
  • passed pipeline: Job A succeeds and Job B succeeds

Logically, this indicates that regardless of whether Job B succeeded or failed, Job C runs if Job A succeeded. Furthermore, the failure state is preserved in the overall pipeline.
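To reproduce the "Job A succeeds and Job B fails" row above yourself, one way (my own illustration, not part of the answer) is to make test_job exit non-zero:

```yaml
test_job:
  stage: test
  script:
    - echo Executing tests
    - "false"   # non-zero exit code simulates a failing test suite
  when: on_success
```

With this change, cleanup_deployment_failure should run while cleanup_deployment_success stays skipped, and the pipeline as a whole is marked failed.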

answered Oct 22 '22 by concision

The needs DAG field can be used to conditionally execute the cleanup (Job C), if Job B fails or succeeds, but NOT when it is skipped because Job A failed.

That might have changed with GitLab 13.11 (April 2021)

Optional DAG ('needs:') jobs in CI/CD pipelines

The directed acyclic graph (DAG) in GitLab CI/CD lets you use the needs syntax to configure a job to start earlier than its stage (as soon as its dependent jobs complete). We also have the rules, only, and except keywords, which determine whether a job is added to a pipeline at all.

Unfortunately, if you combine needs with these other keywords, it’s possible that your pipeline could fail when a dependent job does not get added to a pipeline.

In this release, we are adding the optional keyword to the needs syntax for DAG jobs.

  • If a dependent job is marked as optional but not present in the pipeline, the needs job ignores it.
  • If the job is optional and present in the pipeline, the needs job waits for it to finish before starting.

This makes it much easier to safely combine rules, only, and except with the growing popularity of DAG.

https://about.gitlab.com/images/13_11/optional.png -- Optional DAG ('needs:') jobs in CI/CD pipelines

See Documentation and Issue.
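Applied to this question, a cleanup job using the optional keyword might look like the following sketch (assuming GitLab 13.11 or later; the job names are taken from the earlier example):

```yaml
cleanup_job:
  stage: cleanup
  needs:
    # If test_job was excluded from the pipeline by rules/only/except,
    # this dependency is ignored instead of failing pipeline creation.
    - job: test_job
      optional: true
  script:
    - echo Cleaned up deployment
```

Note that optional addresses the case where the dependent job is absent from the pipeline entirely; it does not by itself change the skipped-vs-failed behaviour the accepted answer works around.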

answered Oct 22 '22 by VonC