I'm using the declarative pipeline syntax to do some CI work inside a docker container.
I've noticed that the Docker plugin for Jenkins runs the container using the user id and group id of the jenkins user on the host (i.e. if the jenkins user has user id 100 and group id 111, the pipeline creates the container with docker run -u 100:111 ...).
I had some problems with this, as the container runs with a non-existent user (in particular, I ran into issues with the user not having a home directory). So I thought of writing a Dockerfile that receives the user id and group id as build arguments and creates a proper jenkins user inside the container. The Dockerfile looks like this:
FROM ubuntu:trusty

ARG user_id
ARG group_id

# Add jenkins user
RUN groupadd -g ${group_id} jenkins
RUN useradd jenkins -u ${user_id} -g jenkins --shell /bin/bash --create-home

USER jenkins
...
The dockerfile agent has an additionalBuildArgs property, so I can read the user id and group id of the jenkins user on the host and pass those as build arguments. The problem I have now is that there seems to be no way of executing those commands in a declarative pipeline before specifying the agent. I want my Jenkinsfile to be something like this:
// THIS WON'T WORK
def user_id = sh(returnStdout: true, script: 'id -u').trim()
def group_id = sh(returnStdout: true, script: 'id -g').trim()

pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('Foo') {
            steps { ... }
        }
        stage('Bar') {
            steps { ... }
        }
        stage('Baz') {
            steps { ... }
        }
        ...
    }
}
Is there any way to achieve this? I've also tried wrapping the pipeline directive inside a node, but the pipeline block needs to be at the root of the file.
If you want to pass multiple build arguments to the docker build command, you have to pass each argument with a separate --build-arg flag:

docker build -t <image-name>:<tag> --build-arg <key1>=<value1> --build-arg <key2>=<value2> .
I verified that trying to assign user_id and group_id without a node didn't work, as you found, but this worked for me to assign these values and later access them:
def user_id
def group_id

node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}

pipeline {
    agent { label 'docker' }
    stages {
        stage('commit_stage') {
            steps {
                echo 'user_id'
                echo user_id
                echo 'group_id'
                echo group_id
            }
        }
    }
}
Hopefully these will also work in your additionalBuildArgs
statement.
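Putting the two together, a sketch of what that could look like (the stage name and the whoami step are just illustrative placeholders, and this assumes a Dockerfile like the one in the question that accepts user_id and group_id build arguments):

```groovy
// Compute the ids on some node first, outside the declarative block...
def user_id
def group_id

node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}

// ...then interpolate them into additionalBuildArgs.
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg user_id=${user_id} --build-arg group_id=${group_id}"
        }
    }
    stages {
        stage('build') {
            steps {
                sh 'whoami'  // should report the jenkins user created in the Dockerfile
            }
        }
    }
}
```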
In a comment, you pointed out what is most likely a critical flaw with the approach that figures out the user_id and group_id outside the declarative pipeline before using it to configure the dockerfile agent: the slave on which it discovers the user_id will not necessarily match the slave that kicks off the docker-based build. I don't think there is any way around this while also keeping the declarative Jenkinsfile constraint.
You can guarantee one slave for all stages by using a global agent declaration: Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?
But multiple node references with the same label don't guarantee the same workspace: Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?
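For reference, a global agent declaration is just the agent directive at the pipeline level rather than per stage (the 'docker' label and stage contents here are assumed for illustration):

```groovy
pipeline {
    agent { label 'docker' }  // one node is allocated for the whole run
    stages {
        stage('one') {
            steps { echo 'runs on the allocated node' }
        }
        stage('two') {
            steps { echo 'same node and workspace as stage one' }
        }
    }
}
```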