In GitLab CI, there's an option in the .gitlab-ci.yml file to execute commands before any of the actual script runs, called before_script. The .gitlab-ci.yml examples illustrate installing ancillary programs here. However, what I've noticed is that these changes are not cached in Docker when using a Docker executor. I had naively assumed that after running these commands, Docker would cache the image, so that on the next run or test, Docker would just load the cached image produced after before_script. This would drastically speed up builds.
As an example, my .gitlab-ci.yml looks a little like:
image: ubuntu

before_script:
  - apt-get update -qq && apt-get install -yqq make ...

build:
  script:
    - cd project && make
A possible solution is to go to the runner machine, create a Docker image that can build my software without any further installation, and then reference it in the image section of the YAML file, as sketched below. The downside of this is that whenever I want to add a dependency, I need to log in to the runner machine and update the image before builds will succeed. It would be much nicer if I just had to add the dependency to the end of apt-get install and have Docker / GitLab CI handle the appropriate caching.
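For illustration, such a pre-baked image could be built from a Dockerfile along these lines (a sketch, not my actual setup):

FROM ubuntu
RUN apt-get update -qq && apt-get install -yqq make

After pushing it to a registry, the image: line in .gitlab-ci.yml would point at it, e.g. image: my-registry.example.com/build-image (a hypothetical name).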
There is also a cache keyword in .gitlab-ci.yml, which I tried setting to untracked: true. I thought this would cache everything that wasn't a byproduct of my project, but it didn't seem to have any effect.
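For reference, this is roughly what I tried at the top level of .gitlab-ci.yml; as far as I understand, the runner cache only covers files inside the project workspace, which would explain why packages installed system-wide by apt-get are unaffected:

cache:
  untracked: true   # cache files not tracked by git, within the project directory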
Is there any way to get the behavior I desire?
Caching on GitLab Runner CI

The GitLab CI runners can save artifacts and use them throughout the pipeline, which can help speed up build times. By default, artifacts have an expiry time of 30 days unless specified otherwise.
By default, they are stored locally on the machine where the Runner is installed, and the location depends on the type of executor. When stored locally, they live under the gitlab-runner user's home directory: /home/gitlab-runner/cache/<user>/<project>/<cache-key>/cache.
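For example, a job can declare artifacts with an explicit expiry that overrides the 30-day default (the paths here are hypothetical):

build:
  script:
    - cd project && make
  artifacts:
    paths:
      - project/build/      # hypothetical output directory
    expire_in: 1 week       # overrides the 30-day default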
Disabling caching

You can disable Docker's build caching by passing two arguments to docker build:

--pull: pulls the latest version of the base Docker image, instead of using the locally cached one.
--no-cache: ensures all additional layers in the Dockerfile get rebuilt from scratch, instead of relying on the layer cache.
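In a job script that would look something like this, reusing the registry image variable from the example below:

docker build --pull --no-cache -t $CI_REGISTRY_IMAGE:test .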
With GitLab, you can add a job to your pipeline to build Docker images and push them to the built-in container registry. Here is how... Prerequisites: for this to work, you will need a gitlab-runner with docker-in-docker configured, and a working Dockerfile.
You can add a stage to build the image first. If the image hasn't changed, the stage will be very short, under one second.
You can then use that image in the following stages, speeding up the whole process.
This is an example of a .gitlab-ci.yml:
stages:
  - build_test_image
  - test

build_test:
  stage: build_test_image
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:test -f dockerfiles/test/Dockerfile .
    - docker push $CI_REGISTRY_IMAGE:test
  tags:
    - docker_build

test_syntax:
  image: $CI_REGISTRY_IMAGE:test
  stage: test
  script:
    - pip install flake8
    - flake8 --ignore=E501,E265 app/
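The dockerfiles/test/Dockerfile referenced above isn't shown here; a minimal version that bakes in the dependencies might look like this (the package list is an assumption based on what the test job needs, namely pip):

# minimal sketch; the actual packages depend on what the jobs need
FROM ubuntu
RUN apt-get update -qq && apt-get install -yqq python-pip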
Look at the docker_build tag. That tag is used to force the execution of the stage on the runner that has that tag. The executor for that runner is shell, and it's used only to build Docker images, so the host where the runner lives must have Docker Engine installed. I found this solution suits my needs better than docker-in-docker and other solutions.
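For completeness, a shell-executor runner carrying that tag could be registered roughly like this (the URL and token are placeholders):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --tag-list docker_build \
  --description "docker image builder"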
Also, I'm using a private registry, which is why I'm using the $CI_REGISTRY* variables, but you can use Docker Hub without needing to specify the registry. You would have to authenticate on Docker Hub, though.
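With Docker Hub, the login and push steps of build_test would change to something like this, with the credentials stored as secret CI variables (the variable names and image name here are my own):

- docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASSWORD"
- docker build -t mynamespace/myproject:test -f dockerfiles/test/Dockerfile .
- docker push mynamespace/myproject:test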