 

How best to use Docker in continuous delivery?

What is the best way to use Docker in a Continuous Delivery pipeline?

Should the build artefact be a Docker Image rather than a Jar/War? If so, how would that work - I'm struggling to work out how to seamlessly use Docker in development (on laptop) and then have the CI server use the same base image to build the artefact with.

asked Jan 12 '15 by Zuriar



1 Answer

Well, there are of course multiple best practices and many approaches to this. One approach that I have found successful is the following:

  • Keep the deployable code (jars/wars etc.) and the Docker containers in separate VCS repos (we used two different Git repos in my latest project). This means that the Docker images you deploy your code on are built in a separate step, a Docker build so to speak. Here you can e.g. build the Docker images for your database, application server, Redis cache or similar. When a `Dockerfile` or similar changes in your VCS, Jenkins (or whatever you use) can trigger a build of the Docker images. These images should be tagged and pushed to a registry (be it Docker Hub or a local registry).
  • The deployable code (jars/wars etc.) should be built as usual using Jenkins and commit hooks. In one of my projects we actually ran Jenkins in a Docker container as described here.
  • All Docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or data volume containers.
  • The test servers, or whatever steps are part of your pipeline, should be set up according to a spec that is known by your build server. We used a descriptor that connected the newly built tag from the code base to the tag on the Docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct Docker images from the local registry and then started all processes using data volume containers (we used Fig for managing the Docker lifecycle); a sketch of such a script follows below this list.
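
To make the last two points more concrete, here is a minimal sketch of what such a deploy script could look like. The registry address, image names, tag and volume paths are all assumptions made up for illustration; the actual descriptor and naming from the project are not shown in the answer.

    #!/bin/sh
    # Hypothetical deploy script triggered by the Jenkins pipeline on the
    # target host. Registry address, image names, tag and paths are made up
    # for this example.
    set -e

    REGISTRY=registry.example.com:5000
    APP_TAG="$1"   # e.g. 1.42.0, the tag produced by the code build

    # 1. Move the freshly built artifact into the directory that the data
    #    volume container exposes to the application server.
    cp /tmp/artifacts/myapp-"$APP_TAG".war /srv/myapp/webapps/

    # 2. Pull the matching images from the local registry.
    docker pull "$REGISTRY/myapp-tomcat:$APP_TAG"
    docker pull "$REGISTRY/myapp-postgres:$APP_TAG"

    # 3. Create the data volume container once; it only holds volumes and
    #    never needs to stay running.
    docker inspect myapp-data >/dev/null 2>&1 || \
      docker run --name myapp-data \
        -v /srv/myapp/webapps:/usr/local/tomcat/webapps \
        -v /srv/myapp/pgdata:/var/lib/postgresql/data \
        "$REGISTRY/myapp-tomcat:$APP_TAG" true

    # 4. (Re)start everything; fig.yml references the same tags and uses
    #    volumes_from: myapp-data.
    fig up -d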

With this approach, we were also able to run our local processes (databases etc.) as Docker containers. These containers were of course based on the same images as the ones in production and could also be developed on the dev machines. The only real difference between the local dev environment and the production environment was the operating system. The dev machines typically ran Mac OS X/Boot2Docker and prod ran on Linux.
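
As an illustration of that parity, a developer could pull exactly the same tagged image that production runs and start it locally; the registry address, image name and tag below are again assumptions for illustration only.

    # Run the same database image locally (under Boot2Docker on the laptop)
    # that production uses, pulled from the same registry and tag.
    docker pull registry.example.com:5000/myapp-postgres:1.42.0
    docker run -d --name dev-db -p 5432:5432 \
      registry.example.com:5000/myapp-postgres:1.42.0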

answered Oct 06 '22 by wassgren