We're considering using the Jenkins Pipeline plugin for a rather complex project consisting of several deliveries that need to be built using different tools (on different machines) before being merged. Still, it seems easy enough to do a complete build with a single Jenkinsfile, and I like the automatic discovery of git branches that comes with Pipeline.
However, at this point, we have jobs for each of the deliveries and use a build-flow based "meta" job to orchestrate the individual jobs. The nice thing about this is that it also allows starting just one individual job if only small changes were made, just to see whether this delivery still compiles.
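For reference, the current Build Flow meta job is essentially a script along these lines (the job names here are just placeholders):

// current Build Flow "meta" job, with placeholder job names
parallel(
    { build("delivery-a") },
    { build("delivery-b") }
)
build("merge-deliveries")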
To emulate this, some ideas came to mind:

1. Separate Jenkinsfiles for the deliveries that are loaded in the top-level Jenkinsfile (a rough sketch follows below); it seems that the Multibranch Pipeline job does not allow configuring which Jenkinsfile to use yet (https://issues.jenkins-ci.org/browse/JENKINS-35415), however, so creating the jobs for the individual deliveries is still open.

2. "if"s for all deliveries in the Jenkinsfile to be able to select which should be built. This would mix different build types in one pipeline, though, and, at the very least, mess up the estimation of the build time.

Are those viable options, or is there a better one?
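To make the first idea more concrete, I imagine the top-level Jenkinsfile looking roughly like this (the directory layout and the build() method are made up for illustration):

node {
    checkout scm
    // each delivery keeps its own partial pipeline script in its subdirectory
    def deliveryA = load "delivery-a/build.groovy"
    def deliveryB = load "delivery-b/build.groovy"
    deliveryA.build()
    deliveryB.build()
}

with each partial script defining a build() method and ending in "return this" so its methods can be called from the top-level script.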
What you could do is write a pipeline script that has "if"-guards around the individual stages, like this:
stage "s1"
if (theStage in ["s1","all"]) {
sleep 2
}
stage "s2"
if (theStage in ["s2", "all"]) {
sleep 2
}
stage "s3"
if (theStage in ["s3", "all"]) {
sleep 2
}
Then you can create a "main" job that uses this script and runs all stages at once by setting the parameter "theStage" to "all". This job collects the statistics for full runs and gives you useful build time estimates.
Furthermore, you can create a "partial run" job that uses the same script and is parametrized with the stage you want to run. Its build time estimates will not be very useful, though.
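If you would rather not configure the parameter manually on both jobs, the scripted pipeline can also declare it itself via the properties step (just one option; name and default match the script above):

// declare the "theStage" string parameter from within the pipeline script
properties([
    parameters([
        string(name: 'theStage', defaultValue: 'all',
               description: 'Which stage to run: s1, s2, s3, or all')
    ])
])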
Note that I put the stage itself into the main script and only the execution code into the conditional, as suggested by Martin Ba. This makes sure that the visualization of the job is more reliable.