I am trying to upgrade my current regression infrastructure to use the Pipeline plugin, and I realize there are two methods: scripted pipelines and declarative pipelines. Going through multiple articles, I get the impression that declarative pipelines are more future-proof and more powerful, so I am inclined to use them. But there seem to be the following restrictions, which I don't want in my setup:
1. The Jenkinsfile needs to be in the repository. I don't want to keep my Jenkinsfile in the code repository.
2. Since the Jenkinsfile needs to be in SCM, does that mean I cannot test any modification to the file until I check it in to the repository?
Any details on the above will be very helpful.
Declarative pipelines are compiled to scripted ones, so scripted pipelines will definitely not go away. But declarative ones are a bit easier to handle, so you're fine either way.
You don't have to check a Jenkinsfile into VCS. You can also set up a job of type Pipeline and define the pipeline script directly in the job configuration. But this has the usual disadvantages, like no version history.
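For reference, a minimal declarative pipeline like the following can be pasted straight into the "Pipeline script" box of a Pipeline-type job, with no SCM involved (the stage name and echoed text are just placeholders):

```groovy
// Minimal declarative pipeline, suitable for defining inline in a
// Pipeline-type job rather than checking a Jenkinsfile into VCS.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                echo 'Hello from an inline declarative pipeline'
            }
        }
    }
}
```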
When using multibranch pipelines, i.e., every branch containing a Jenkinsfile generates its own job, you just push your changed pipeline to a new branch and execute it there. Once it works, you merge it.
This approach certainly lengthens the feedback cycle a bit, but it applies the same principles as writing your software: for experimentation, just set up a Pipeline-type job and play around; afterwards, commit the pipeline to a branch, test it, review it, and merge it.
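In a multibranch setup, a single Jenkinsfile can also behave differently per branch, which makes the branch-then-merge workflow above safer. A sketch using the declarative `when { branch ... }` directive (stage names, branch name, and echoed commands are illustrative):

```groovy
// Hypothetical Jenkinsfile for a multibranch pipeline: every branch
// runs the Build stage, but the Deploy stage only runs on main, so
// experimental pipeline branches cannot trigger a deployment.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building on every branch'
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                echo 'Deploying from main only'
            }
        }
    }
}
```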