
Docker in Docker - volumes not working: Full of files in 1st level container, empty in 2nd tier

I am running Docker in Docker (specifically, to run Jenkins, which then runs Docker builder containers to build the project images, and then runs those images together with the test containers).

This is how the jenkins image is built and started:

docker build --tag bb/ci-jenkins .
mkdir $PWD/volumes/
docker run -d --network=host  \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  -v $PWD/volumes/jenkins_home:/var/jenkins_home \
  --name ci-jenkins bb/ci-jenkins

Jenkins works fine. But then there is a Jenkinsfile-based job, which runs this:

docker run -i --rm -v /var/jenkins_home/workspace/forkMV_jenkins-VOLTRON-3057-KQXKVJNXOU4DGSUG3P27IR3QEDHJ6K7HPDEZYN7W6HCOTCH3QO3Q:/tmp/build collab/collab-services-api-mvn-builder:2a074614 mvn -B -T 2C install

And this ends up with an error:

The goal you specified requires a project to execute but there is no POM in this directory (/tmp/build).

When I docker exec -it ... sh into the builder container, /tmp/build is empty. But in the Jenkins container, the path /var/jenkins_home/...QO3Q/ exists and contains the workspace with all the files checked out and prepared.
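
(Roughly the checks described above; the builder container ID is a placeholder:)

# Inside the builder container, the mount target is empty
docker exec -it <builder-container-id> ls -la /tmp/build

# Inside the Jenkins container, the workspace is fully populated
docker exec -it ci-jenkins ls -la /var/jenkins_home/workspace/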

So I wonder: how can Docker happily mount the volume, and yet it ends up empty?

What's even more confusing is that this setup works for my colleague on a Mac. I am on Linux (Ubuntu 17.10), with the latest Docker.

Asked Mar 16 '18 by Ondra Žižka



3 Answers

After some research, calming down and thinking, I realized that Docker-in-Docker is not really so much "-in-", as it is rather "Docker-next-to-Docker".

The trick to make a container able to run another container is sharing /var/run/docker.sock through a volume: -v /var/run/docker.sock:/var/run/docker.sock

And then the docker client in the container actually calls Docker on the host.
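
A quick way to see this (a sketch, assuming the ci-jenkins container from the question is running): list containers from inside the Jenkins container and you get the host's containers, including ci-jenkins itself.

# Both the inner client and the host client talk to the same socket,
# so this prints the host daemon's container list
docker exec -it ci-jenkins docker ps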

The volume source path (left of :) does not refer to the middle container, but to the host filesystem!

After realizing that, the fix is to make the paths to the Jenkins workspace directory the same in the host filesystem and the Jenkins (middle) container:

docker run -d --network=host  \
   ...
   -v /var/jenkins_home:/var/jenkins_home

And voilà! It works. (I created a symlink instead of moving it, and that seems to work too.)
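
Putting it together, a sketch of the corrected startup command, assuming the Jenkins home from the question is made available at /var/jenkins_home on the host:

# Either move the Jenkins home to /var/jenkins_home on the host,
# or symlink the old location there (both appear to work):
sudo ln -s "$PWD/volumes/jenkins_home" /var/jenkins_home

# Then start Jenkins with the same path on both sides of the mount:
docker run -d --network=host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /var/jenkins_home:/var/jenkins_home \
  --name ci-jenkins bb/ci-jenkins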

It is a bit complicated if you're looking at a colleague's Mac, because Docker is implemented a bit differently there - it runs in an Alpine Linux-based VM but pretends not to. (Not 100% sure about that.) On Windows, I read that the paths have another layer of abstraction - mapping from C:/somewhere/... to a Linux-like path.

I hope I'll save someone hours of figuring out :)

Answered Oct 21 '22 by Ondra Žižka


Alternative Solution with Docker cp

I was facing the same problem of mounting volumes from a build that runs in a Docker container, inside a Jenkins server running in Kubernetes. As we use Docker-in-Docker (dind), I couldn't mount the volume in either of the ways proposed here. I'm still not sure what the reason is, but I found an alternative: use docker cp to copy the build artifacts.


Multi-stage Docker Image for Tests

I'm using the following Dockerfile stage for Unit + Integration tests.

#
# Build stage for building the jar
#
FROM maven:3.2.5-jdk-8 as builder
MAINTAINER [email protected]

# Only copy the necessary to pull only the dependencies from registry
ADD ./pom.xml /opt/build/pom.xml
# As some entries in pom.xml refer to the settings, keep them the same
ADD ./settings.xml /opt/build/settings.xml

WORKDIR  /opt/build/

# Prepare by downloading dependencies
RUN mvn -s settings.xml -B -e -C -T 1C org.apache.maven.plugins:maven-dependency-plugin:3.0.2:go-offline

# Run the full packaging after copying the source
ADD ./src /opt/build/src
RUN mvn -s settings.xml install -P embedded -Dmaven.test.skip=true -B -e -o -T 1C verify

# Building only this stage can be done with the --target builder switch
# 1. Build: docker build -t config-builder --target builder .
# When running this first-stage image, only the unit tests are run by default
# Override this by removing the "!" to run the integration tests instead:
# 2. docker run --rm -ti config-builder mvn -s settings.xml -Dtest="*IT,*IntegrationTest" test
CMD mvn -s settings.xml -Dtest="!*IT,!*IntegrationTest" -P jacoco test

Jenkins Pipeline for Tests

  • My Jenkins pipeline has a stage for running parallel tests (Unit + Integration).
  • What I do is to build the Test Image in a stage, and run the tests in parallel.
  • I use docker cp to copy the build artifacts out of the test Docker container; because the tests run in a named container, that container can be started again after the tests finish and the artifacts copied from it.
    • Alternatively, you can use Jenkins stash to carry the test results to a post stage.

At this point, I solved the problem with docker run --name tests-SHA tests:SHA, followed by docker start tests-SHA and docker cp tests-SHA:/path ., where . is the current workspace directory. This gives a result similar to having a Docker volume mounted on the current directory.
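
As a plain-shell sketch of that sequence (the SHA value here is a placeholder for ${env.GIT_COMMIT} in the pipeline below):

SHA=abc1234                              # placeholder commit hash

# Run the tests in a *named* container (no --rm, so it is kept after it exits)
docker run --name tests-$SHA tests:$SHA

# docker cp also works on a stopped container, so the artifacts
# can be copied into the current workspace directory afterwards
docker cp tests-$SHA:/opt/build/target .

# Remove the named container once the artifacts are secured
docker rm tests-$SHA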

stage('Build Test Image') {
  steps {
    script {
      currentBuild.displayName = "Test Image"
      currentBuild.description = "Building the docker image for running the test cases"
    }
    echo "Building docker image for tests from build stage ${env.GIT_COMMIT}"
    sh "docker build -t tests:${env.GIT_COMMIT} -f ${paas.build.docker.dockerfile.runtime} --target builder ."
  }
}

stage('Tests Execution') {
  parallel {
    stage('Execute Unit Tests') {
      steps {
        script {
          currentBuild.displayName = "Unit Tests"
          currentBuild.description = "Running the unit tests cases"
        }
        sh "docker run --name tests-${env.GIT_COMMIT} tests:${env.GIT_COMMIT}"
        sh "docker start tests-${env.GIT_COMMIT}"
        sh "docker cp tests-${env.GIT_COMMIT}:/opt/build/target ."

        // https://jenkins.io/doc/book/pipeline/jenkinsfile/#advanced-scripted-pipeline#using-multiple-agents
        stash includes: '**/target/*', name: 'build'
      }
    }
    stage('Execute Integration Tests') {
      when {
        expression { paas.integrationEnabled == true }
      }
      steps {
        script {
          currentBuild.displayName = "Integration Tests"
          currentBuild.description = "Running the Integration tests cases"
        }
        sh "docker run --rm tests:${env.GIT_COMMIT} mvn -s settings.xml -Dtest=\"*IT,*IntegrationTest\" -P jacoco test"
      }
    }
  }
}
Answered Oct 21 '22 by Marcello de Sales


A better approach is to use the Jenkins Docker plugin and let it do all the mounting for you; just add -v /var/run/docker.sock:/var/run/docker.sock to the arguments of its inside function.

E.g.

docker.build("bb/ci-jenkins")
docker.image("bb/ci-jenkins").inside('-v /var/run/docker.sock:/var/run/docker.sock')

{
 ...
}
Answered Oct 21 '22 by yorammi