As part of my build pipeline, I have a container with build tools that is used for multiple projects. One of my projects contains a build step that builds and publishes a container, which is done from within the build-tools container. My Docker-enabled Jenkins slaves are configured with a user `jenkins` who is in the group `docker`. I used `-v` to mount the Docker binary and socket. This can be achieved/reproduced by either:

- creating the `jenkins` user inside the build-tools image with a fixed UID and GID matching the build machines, or
- passing the host user's UID and GID at startup, via `docker run -u $(id -u):$(id -g)`.
The issue with the first strategy is that the user and group IDs differ across the various build machines. I could fix this by changing the UID and GID on all build machines to the same values, but wasn't Docker meant to run in isolation, without many dependencies on the environment/context? This does not feel like the right solution to me.
The second strategy works perfectly fine on the command line; however, there seems to be no way of passing the UID and GID to the agent command in a Jenkinsfile. The `args` parameter does not support scripts or variables such as `$(id -u)`.
I expected not to be the first one facing this issue; however, I was not able to find a solution by myself, via search engines, or on Stack Overflow. Should I go with 'prepped' build slaves, or is there a way to get the second strategy working?
-edit-
I understand the options to run the container as root and switch users after starting (e.g. using an entrypoint). However, that would require my Jenkins slave to be connected as root, which is unacceptable to me. Another alternative I found is to `chmod 777` all resources, which completely defeats the security purpose of not running the Jenkins slave as root. I would prefer to use the `-u` option on the container, but I can't find a way to determine the UID and GID on a Jenkins slave before starting up the Docker agent (the `docker run` command) from within the Jenkinsfile.
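For reference, the kind of dynamic invocation I would need looks roughly like this. This is only a sketch in scripted-pipeline syntax, assuming the Docker Pipeline plugin is installed; the image name `build-tools:latest` and the build command are placeholders:

```groovy
// Sketch: determine UID/GID on the slave first, then pass them via -u.
// Assumes the Docker Pipeline plugin (docker.image(...).inside(...)).
node('docker') {
    def uid = sh(script: 'id -u', returnStdout: true).trim()
    def gid = sh(script: 'id -g', returnStdout: true).trim()
    // 'build-tools:latest' and the script name are placeholders
    docker.image('build-tools:latest').inside("-u ${uid}:${gid}") {
        sh './build-and-publish.sh'
    }
}
```

This works in a `script` block because the `sh` step's output can be interpolated, unlike the declarative `args` parameter.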
Actually, I believe your first idea for a solution can be achieved easily with docker and without the need to run any Jenkins slave as root.
Consider this command:
```shell
docker run --rm -it -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro \
    -v /etc/group:/etc/group:ro debian:10 /bin/su linux-fan -c /bin/bash
```
This creates a new container and maps the users from the host into the container. Then, inside that container, it immediately drops to user `linux-fan`, which (only) needs to be defined on the host system. Whether you run this command as root or as any user in the `docker` group makes no difference (note that the comments are very right: membership in the `docker` group is equivalent to root access!).
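To see the effect, you can compare the identity reported inside such a container with the one on the host. This assumes Docker is available and that a user named `linux-fan` exists on the host; `debian:10` is just an example image:

```shell
# On the host: note the user's UID/GID
id linux-fan

# Inside the container: the same identity is resolved via the mounted files
docker run --rm -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
    debian:10 /bin/su linux-fan -c id
```

Both commands should report the same UID and GID, since the container resolves names against the host's mounted `/etc/passwd` and `/etc/group`.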
Also, mapping things into the container this way (as is already the case when mounting the Docker socket...) gives up most of the isolation that a container provides. It would thus be sensible to consider running whatever command requires access to the host's Docker daemon directly on the host, or in a less isolated environment such as a chroot. Of course, the simplicity of invoking Docker may still outweigh the lack of isolation here.
A solution without host access could work around this entirely: with Docker-in-Docker, i.e. running a new Docker daemon inside the build container instead of accessing the host's, the two are isolated from each other, so the host's user and group IDs no longer matter.
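A minimal sketch of that setup, assuming the official `docker:dind` and `docker` images; the network name and port are arbitrary, and TLS is disabled here only for brevity (not recommended beyond local experiments):

```shell
# Start an isolated Docker daemon (dind requires --privileged)
docker network create build-net
docker run -d --privileged --name dind --network build-net \
    -e DOCKER_TLS_CERTDIR='' docker:dind

# Point a build container's Docker client at the inner daemon, not the host's
docker run --rm --network build-net -e DOCKER_HOST=tcp://dind:2375 \
    docker:latest docker info
```

Images built this way live inside the inner daemon, so publishing them still requires a push to a registry reachable from `dind`.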