Is it possible to cache Docker images on Travis CI? Attempting to cache the /var/lib/docker/aufs/diff folder and the /var/lib/docker/repositories-aufs file with cache.directories in the .travis.yml doesn't seem to work, since those paths require root access.
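For reference, a sketch of the kind of configuration that fails here (the paths match the question; the aufs data is owned by root, so the Travis cache step cannot archive it):

cache:
  directories:
    - /var/lib/docker/aufs/diff          # root-owned, the cache step can't read it
    - /var/lib/docker/repositories-aufs  # also a file rather than a directory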
Docker uses a layer cache to optimize and speed up the process of building Docker images. Layer caching mainly applies to the RUN, COPY and ADD instructions: if an instruction and its inputs are unchanged since the last build, Docker reuses the existing layer instead of executing the step again.
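A quick way to see the layer cache at work (myapp is just an example tag, assuming a Dockerfile in the current directory):

docker build -t myapp .   # first build: every instruction is executed
docker build -t myapp .   # rebuild: unchanged steps report "Using cache"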
If you use the default storage driver, overlay2, your Docker images are stored in /var/lib/docker/overlay2. There you can find the directories that hold the read-only layers of an image, plus a writable layer on top that contains your changes.
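You can check which driver is in use and look at the layer data directly; note that listing it needs root, which is exactly the problem the question runs into:

docker info --format '{{.Driver}}'   # prints the storage driver, e.g. overlay2 or aufs
sudo ls /var/lib/docker/overlay2     # layer directories, readable only by root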
Travis CI builds can run Docker, build images, and push them to Docker registries or other remote storage.
Docker's build cache is a handy feature: it speeds up builds by reusing previously created layers. You can pass the --no-cache option to disable caching, or use a custom build argument to force rebuilding from a certain step.
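Both options in shell form; note that CACHEBUST is a common convention rather than a built-in flag, and it only works if the Dockerfile declares ARG CACHEBUST just before the steps you want to invalidate:

docker build --no-cache -t myapp .                         # ignore the cache entirely
docker build --build-arg CACHEBUST=$(date +%s) -t myapp .  # invalidate from the ARG onwards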
From a Docker perspective, I think the best way you could do this (short of running a network-local registry) is to save the Docker image and cache the exported tarball. You would then load that at the start of the build rather than pull the image. This way you're not messing with Docker's storage implementation.
install:
  - mkdir -p docker   # make sure the cache directory exists on the first run
  - docker pull busybox
  - docker save busybox | gzip > docker/busybox.tar.gz

cache:
  directories:
    - docker
You would then need to load the cached image before your Travis run:
before_script:
  - gzip -dc docker/busybox.tar.gz | docker load
The bit I'm not clear on with Travis is whether you need to stop it from running the install step after the first time; you don't want Travis pulling and exporting the image on every build once it's cached. I'm not sure if having the cache directive automatically does that for you.
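One way to be safe is to guard the pull-and-save step with a file check; a minimal sketch, assuming the tarball path used above:

install:
  - mkdir -p docker
  - if [ ! -f docker/busybox.tar.gz ]; then docker pull busybox && docker save busybox | gzip > docker/busybox.tar.gz; fi

This skips the pull entirely once the tarball has been restored from the cache, at the cost of never picking up a newer busybox image until the cache is cleared.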
The main question then is whether this is actually going to be any quicker than pulling the image or not:
The caching tars up all the directories listed in the configuration and uploads them to S3, using a secure and protected URL, ensuring security and privacy of the uploaded archives.
Note that this makes our cache not network-local, it’s still bound to network bandwidth and DNS resolutions for S3. That impacts what you can and should store in the cache. If you store archives larger than a few hundred megabytes in the cache, it’s unlikely that you’ll see a big speed improvement.
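So it's worth measuring the compressed size before committing to this approach (busybox is tiny, but a real application image may not be):

docker save busybox | gzip | wc -c   # compressed size in bytes, without writing a file
ls -lh docker/busybox.tar.gz         # or inspect the tarball the install step produced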
You might just be adding overhead. As the Docker registry is backed by CloudFront, Travis is already pulling compressed images from local, or at least nearby, Amazon infrastructure. Maybe ask them to support caching Docker images natively, similar to what they do for apt packages, although it doesn't sound hopeful.
Have a look at what CircleCI recommends: https://circleci.com/docs/docker/#caching-docker-layers. It should be easy to combine docker save/docker load with the directory caching provided by Travis, as in the combined configuration below.
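Putting the pieces together, a complete .travis.yml sketch; busybox stands in for whatever image your build actually depends on:

sudo: required

services:
  - docker

cache:
  directories:
    - docker

install:
  - mkdir -p docker
  - if [ ! -f docker/busybox.tar.gz ]; then docker pull busybox && docker save busybox | gzip > docker/busybox.tar.gz; fi

before_script:
  - gzip -dc docker/busybox.tar.gz | docker load

script:
  - docker run --rm busybox echo "image restored from cache"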