 

Aggregating results of downstream parameterised jobs in Jenkins

Tags:

jenkins

I have a Jenkins Build job which triggers multiple Test jobs with the test name as a parameter using the Jenkins Parameterized Trigger Plugin. This kicks off a number of test builds on multiple executors which all run correctly.

I now want to aggregate the results using 'Aggregate downstream test results -> Automatically aggregate all downstream tests'. I have enabled this in the Build job and have set up fingerprinting so that these are recognised as downstream jobs. On the Build job's lastBuild page I can see that they are recognised as downstream builds:

Downstream Builds

Test #1-#3

When I click on "Aggregated Test Results" however it only shows the latest of these (Test #3). This may be good behaviour if the job always runs the same tests but mine all run different parts of my test suite.

Is there some way I can get this to aggregate all of the relevant downstream Test builds?

Additional: Aggregated Test Results does work if you replicate the Test job. This is not ideal as I have a large number of test suites.

asked May 10 '12 by Russell Gallop

2 Answers

I'll outline the manual solution (as mentioned in the comments), and provide more details if you need them later:

Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).

  1. An instance (build) of P invokes D through the Parameterized Trigger Plugin as a build step (not as a post-build action) and waits for D to finish. Along with other parameters, P passes D a parameter - let's call it PARENT_ID - based on P's BUILD_ID.
  2. D executes the tests and archives the results as artifacts (along with JUnit reports, if applicable).
  3. P then executes an external Python (or internal Groovy) script that finds the appropriate build of D via PARENT_ID (iterate over the builds of D and examine the value of the PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.

If using Python (that's what I do), use the Python JenkinsAPI wrapper. If using Groovy, use the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
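As a rough sketch of step 3 using the Python JenkinsAPI wrapper (the Jenkins URL, the job name, and the destination directory below are placeholders to adapt; the matching logic is kept in a plain function so it is easy to test in isolation):

```python
from typing import Iterable, Optional


def find_matching_build(builds: Iterable, parent_id: str):
    """Return the first build whose PARENT_ID parameter equals parent_id.

    Works with any object exposing get_params() -> dict, which is what
    jenkinsapi Build objects provide. Returns None if no build matches.
    """
    for build in builds:
        if build.get_params().get('PARENT_ID') == parent_id:
            return build
    return None


def copy_downstream_artifacts(jenkins_url: str, job_name: str,
                              parent_id: str, dest_dir: str) -> bool:
    """Locate the build of the downstream job triggered with PARENT_ID
    and save its artifacts into dest_dir. Returns True if found."""
    from jenkinsapi.jenkins import Jenkins  # pip install jenkinsapi

    job = Jenkins(jenkins_url).get_job(job_name)
    # Iterate builds newest-first; stop at the one carrying our PARENT_ID.
    builds = (job.get_build(bid) for bid in job.get_build_ids())
    build = find_matching_build(builds, parent_id)
    if build is None:
        return False
    for artifact in build.get_artifacts():
        artifact.save_to_dir(dest_dir)
    return True
```

P would call something like `copy_downstream_artifacts('http://jenkins:8080', 'D', os.environ['BUILD_ID'], 'results')` and then publish whatever landed in `results`.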

answered Sep 29 '22 by malenkiy_scot


I came up with the following solution using declarative pipelines.

It requires the "copy artifact" plugin to be installed.

In the downstream job, set an environment variable with the path (or pattern) of the result file:

post {
  always {
    script {
      // Note: must be set BEFORE any step that may fail
      env.RESULT_FILE = 'Devices\\resultsA.xml'
    }
    xunit([GoogleTest(
      pattern: env.RESULT_FILE,
    )])
    // Archive the results so the parent job can fetch them
    // with the "copy artifact" plugin.
    archiveArtifacts artifacts: env.RESULT_FILE
  }
}

Note that I use xunit here, but the same applies with junit.
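For the junit variant (a sketch, assuming the same RESULT_FILE variable), the xunit step above would simply become:

```groovy
// JUnit-plugin equivalent of the xunit step above
junit testResults: env.RESULT_FILE
```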

In the parent job, collect the triggered builds, then aggregate their results in the post section with the following code:

def runs=[]

pipeline {
  agent any
  stages {
    stage('Tests') {
      parallel {
        stage('test A') {
          steps {
            script {
              runs << build(job: "test A", propagate: false)
            }
          }
        }
        stage('test B') {
          steps {
            script {
              runs << build(job: "test B", propagate: false)
            }
          }
        }
      }
    }
  }
  post {
    always {
      script {
        currentBuild.result = 'SUCCESS'
        def result_files = []
        runs.each {
          if (it.result != 'SUCCESS') {
            currentBuild.result = it.result
          }
          copyArtifacts(
            filter: it.buildVariables.RESULT_FILE,
            fingerprintArtifacts: true,
            projectName: it.getProjectName(),
            selector: specific(it.getNumber().toString())
          )
          result_files << it.buildVariables.RESULT_FILE
        }
        env.RESULT_FILE = result_files.join(',')
        println('Results aggregated from ' + env.RESULT_FILE)
      }
      archiveArtifacts env.RESULT_FILE
      xunit([GoogleTest(
        pattern: env.RESULT_FILE,
      )])
    }
  }
}

Note that the parent job also sets the env variable, so it can itself be aggregated by a further parent job.

answered Sep 29 '22 by A. Richard