I have a Jenkins Build job which triggers multiple Test jobs with the test name as a parameter using the Jenkins Parameterized Trigger Plugin. This kicks off a number of test builds on multiple executors which all run correctly.
I now want to aggregate the results using 'Aggregate downstream test results->Automatically aggregate all downstream tests'. I have enabled this in the Build job and have set up fingerprinting so that these are recognised as downstream jobs. In the Build jobs lastBuild page I can see that they are recognised as downstream builds:
Downstream Builds
Test #1-#3
When I click on "Aggregated Test Results" however it only shows the latest of these (Test #3). This may be good behaviour if the job always runs the same tests but mine all run different parts of my test suite.
Is there some way I can get this to aggregate all of the relevant downstream Test builds?
Additional: Aggregated Test Results does work if you replicate the Test job. This is not ideal as I have a large number of test suites.
I'll outline the manual solution (as mentioned in the comments), and provide more details if you need them later:
Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).
If you use Python (that's what I do), use the Python JenkinsAPI wrapper. If you use Groovy, use the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
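The manual approach boils down to walking the parent build's downstream builds and summing their test counts yourself. A minimal sketch of that summing step in Python (the JenkinsAPI calls in the comment are assumptions — server URL, job name `P`, and the result-set fields are placeholders, not from the original post):

```python
# Pure aggregation step: sum per-build test tallies into one total.
from collections import Counter

def aggregate_counts(per_build_counts):
    """Merge a list of {'passed': n, 'failed': n, 'skipped': n}
    dicts (one per downstream build) into a single overall tally."""
    total = Counter()
    for counts in per_build_counts:
        total.update(counts)
    return dict(total)

# With the jenkinsapi package, the per-build tallies could come from
# something like the following (hypothetical names, sketch only):
#
#   from jenkinsapi.jenkins import Jenkins
#   server = Jenkins('http://jenkins.example.com')
#   parent = server['P'].get_last_build()
#   counts = []
#   for d in parent.get_downstream_builds():   # found via fingerprints
#       rs = d.get_resultset()                 # that build's test report
#       counts.append({'passed': rs._data['passCount'],
#                      'failed': rs._data['failCount'],
#                      'skipped': rs._data['skipCount']})
#   print(aggregate_counts(counts))

print(aggregate_counts([
    {'passed': 10, 'failed': 1, 'skipped': 0},
    {'passed': 7,  'failed': 0, 'skipped': 2},
]))
```

Unlike the built-in "Aggregated Test Results" page, this keeps every downstream build's numbers, regardless of whether they came from the same job.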
I came up with the following solution using declarative pipelines.
It requires the Copy Artifact plugin.
In the downstream job, set an environment variable with the path (or glob pattern) of the result file:
post {
    always {
        script {
            // Note: must be set BEFORE any step that may fail,
            // so the parent can still read it from buildVariables
            env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        xunit([GoogleTest(
            pattern: env.RESULT_FILE,
        )])
    }
}
Note that I use xunit, but the same applies with junit.
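With junit the downstream post block would look roughly like this (a sketch; the `junit` step takes the result pattern directly):

```groovy
post {
    always {
        script {
            env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        junit testResults: env.RESULT_FILE
    }
}
```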
In the parent job, keep the handles of the triggered builds, then aggregate the results in the post section with the following code:
def runs = []
pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('test A') {
                    steps {
                        script {
                            runs << build(job: "test A", propagate: false)
                        }
                    }
                }
                stage('test B') {
                    steps {
                        script {
                            runs << build(job: "test B", propagate: false)
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                currentBuild.result = 'SUCCESS'
                def result_files = []
                runs.each {
                    if (it.result != 'SUCCESS') {
                        currentBuild.result = it.result
                    }
                    copyArtifacts(
                        filter: it.buildVariables.RESULT_FILE,
                        fingerprintArtifacts: true,
                        projectName: it.getProjectName(),
                        selector: specific(it.getNumber().toString())
                    )
                    result_files << it.buildVariables.RESULT_FILE
                }
                env.RESULT_FILE = result_files.join(',')
                println('Results aggregated from ' + env.RESULT_FILE)
            }
            archiveArtifacts env.RESULT_FILE
            xunit([GoogleTest(
                pattern: env.RESULT_FILE,
            )])
        }
    }
}
Note that the parent job also sets the environment variable, so it can itself be aggregated by its own parent job.