I'm rebuilding an existing build pipeline as a Jenkins declarative pipeline (multi-branch pipeline) and have a problem handling build promotion.
After packaging and stashing all relevant files, the pipeline is supposed to wait for user input before triggering deployment.
If I just add an input step, the current build node stays blocked. As this executor is pretty heavy, I would like to move the waiting step to a more lightweight machine.
Initially I wrote the job as a scripted pipeline and simply created two different node('label') blocks. Is there a way to do something similar with the declarative syntax?
```groovy
node('spine') {
    stage('builder') {
        sh 'mvn clean compile'
        stash name: 'artifact', includes: 'target/*.war'
    }
}

node('lightweight') {
    stage('wait') {
        timeout(time: 5, unit: 'DAYS') {
            input message: 'Approve deployment?'
        }
    }
    // add deployment stages
}
```
I tried a couple of things already:
- Configuring the agent at the top level and adding an additional agent config to the promotion stage. Then two executors are blocked, because the top-level build node is not released.
- Setting agent none at the top level and configuring the agents per stage. Then the git checkout is not present on the first node.
EDIT 1
I reconfigured my pipeline following your advice; it currently looks like this:
```groovy
pipeline {
    agent none
    tools {
        maven 'M3'
    }
    stages {
        stage('Build') {
            agent { label 'spine' }
            steps {
                checkout scm // needed, otherwise the workspace on the first stage is empty
                sh "mvn clean compile"
            }
        }
        stage('Test') {
            agent { label 'spine' }
            steps {
                sh "mvn verify" // fails because the workspace is empty again
                junit '**/target/surefire-reports/TEST-*.xml'
            }
        }
    }
}
```
This build will fail because the workspace does not carry over between stages, as they don't necessarily run on the same executor.
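Since each stage-level agent may allocate a fresh workspace, one common workaround is to hand the checkout over explicitly with stash/unstash. A minimal sketch of that approach (the stash name 'sources' and the includes pattern are illustrative):

```groovy
pipeline {
    agent none
    tools {
        maven 'M3'
    }
    stages {
        stage('Build') {
            agent { label 'spine' }
            steps {
                checkout scm
                sh "mvn clean compile"
                stash name: 'sources', includes: '**' // hand the workspace to later stages
            }
        }
        stage('Test') {
            agent { label 'spine' }
            steps {
                unstash 'sources' // restore the workspace, whichever executor this stage got
                sh "mvn verify"
                junit '**/target/surefire-reports/TEST-*.xml'
            }
        }
    }
}
```

This makes the pipeline independent of which executor each stage lands on, at the cost of transferring the stashed files between nodes.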
EDIT 2
Apparently the stages sometimes run on the same executor and sometimes don't. (We spawn build slaves on our Mesos/DC/OS cluster on demand, so changing the executor mid-build would be a problem.)
I expected Jenkins to keep using the current executor as long as the label in the agent definition does not change.
The top level of the Pipeline must be a block, specifically pipeline { }.
- No semicolons as statement separators; each statement has to be on its own line.
- Blocks must only consist of declarative sections, declarative directives, declarative steps, or assignment statements.
Basically, declarative and scripted pipelines differ in their programmatic approach: one uses a declarative programming model, the other an imperative one. Declarative pipelines break stages down into multiple steps, while scripted pipelines do not require this.
Yes, you can, but only by calling an external function (or using a script block) inside a steps block.
Declarative Pipeline fundamentals: in Declarative Pipeline syntax, the pipeline block defines all the work done throughout your entire Pipeline. The agent directive lets you execute the Pipeline, or any of its stages, on any available agent. A stage directive defines, for example, the "Build" stage, and its steps block performs the work related to that stage.
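The fundamentals above correspond to this minimal declarative skeleton (a sketch following the Jenkins documentation; the echo step is illustrative):

```groovy
pipeline {
    agent any // execute this Pipeline, or any of its stages, on any available agent
    stages {
        stage('Build') { // defines the "Build" stage
            steps {
                echo 'Building..' // perform some steps related to the "Build" stage
            }
        }
    }
}
```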
See best practice 7: don't use input within a node block. In a declarative pipeline, node selection is done through the agent directive.
The documentation here describes how you can define agent none for the pipeline and then use a stage-level agent directive to run the stages on the required nodes. I tried the opposite too (defining a global agent on some node and then agent none on stage level for the input), but that doesn't work: once the pipeline has allocated a slave, you can't release it for one or more specific stages.
This is the structure of our pipeline:
```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'yona' }
            steps {
                ...
            }
        }
        stage('Decide tag on Docker Hub') {
            agent none
            steps {
                script {
                    env.TAG_ON_DOCKER_HUB = input message: 'User input required',
                        parameters: [choice(name: 'Tag on Docker Hub', choices: 'no\nyes',
                            description: 'Choose "yes" if you want to deploy this build')]
                }
            }
        }
        stage('Tag on Docker Hub') {
            agent { label 'yona' }
            when {
                environment name: 'TAG_ON_DOCKER_HUB', value: 'yes'
            }
            steps {
                ...
            }
        }
    }
}
```
Generally, the build stages execute on a build slave labeled "yona", but the input stage runs on the master.