My Jenkins pipeline runs several tasks in parallel. It appears that if one parallel stage fails, the other parallel stages also run their failure post block (whether they actually failed or not). I don't know if this is by design or if I'm doing something wrong.
Note: this pipeline runs on a Windows node, hence the bat('exit /b 1') call.
pipeline {
    agent any
    stages {
        stage('Parallel steps') {
            parallel {
                stage('Successful stage') {
                    steps {
                        script { sleep 10 }
                    }
                    post {
                        failure {
                            echo('detected failure: Successful stage')
                        }
                    }
                }
                stage('Failure stage') {
                    steps {
                        script { bat('exit /b 1') }
                    }
                    post {
                        failure {
                            echo('detected failure: Failure stage')
                        }
                    }
                }
            }
        }
    }
}
In the above pipeline, only 'Failure stage' fails, yet the output below shows the failure post condition executing for both stages!
Started by user Doe, John
Running on WINDOWS_NODE in D:\workspace
[Successful stage] Sleeping for 10 sec
[Failure stage] [test] Running batch script
[Failure stage] D:\workspace>exit /b 1
Post stage
[Pipeline] [Failure stage] echo
[Failure stage] detected failure: Failure stage
[Failure stage] Failed in branch Failure stage
Post stage
[Pipeline] [Successful stage] echo
[Successful stage] detected failure: Successful stage
ERROR: script returned exit code 1
Finished: FAILURE
What's the best way for me to detect which parallel stage failed and report it to the overall pipeline?
It looks like this is a known bug with Declarative Pipelines. I had to give up on the built-in post { failure { ... } } block and use try/catch instead, which has its own problems.
The code below works correctly: only the failing stage echoes 'detected failure', rather than both.
pipeline {
    agent any
    stages {
        stage('Parallel steps') {
            parallel {
                stage('Successful stage') {
                    steps {
                        script {
                            try {
                                sleep 10
                            } catch (e) {
                                echo('detected failure: Successful stage')
                                throw e
                            }
                        }
                    }
                }
                stage('Failure stage') {
                    steps {
                        script {
                            try {
                                bat('exit /b 1')
                            } catch (e) {
                                echo('detected failure: Failure stage')
                                throw e
                            }
                        }
                    }
                }
            }
        }
    }
}
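To also report which parallel stage failed to the overall pipeline (the second half of the question), the same try/catch workaround can record stage names in a script-level variable and surface them in a pipeline-level post block. The following is only a minimal sketch of one way to do this, not an official Jenkins feature; the failedStages variable name is a hypothetical choice.

// A minimal sketch, assuming a script-level list (here called failedStages)
// declared outside the pipeline block and a Windows node for bat().
def failedStages = []

pipeline {
    agent any
    stages {
        stage('Parallel steps') {
            parallel {
                stage('Successful stage') {
                    steps {
                        script {
                            try {
                                sleep 10
                            } catch (e) {
                                failedStages << 'Successful stage' // record the failed branch
                                throw e                            // keep the stage and build marked FAILED
                            }
                        }
                    }
                }
                stage('Failure stage') {
                    steps {
                        script {
                            try {
                                bat('exit /b 1')
                            } catch (e) {
                                failedStages << 'Failure stage'
                                throw e
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        failure {
            // Runs once for the whole pipeline and lists only the branches that actually failed.
            echo "Failed parallel stages: ${failedStages.join(', ')}"
        }
    }
}

Because each catch block rethrows the original exception, the failing branch and the overall build are still marked FAILED, while the final echo names only the branches that actually failed.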