 

Docker Swarm - Deploying stack with shared code base across hosts


I have a question about best practices for deploying applications to production with Docker Swarm.

To simplify the discussion around this question, let's consider the following scenario (originally illustrated with an architecture diagram):

Our swarm contains:

  • 6 servers (different hosts)
  • on each of these servers, we will have one service
  • each service will run only one task/replica (one container)
  • Memcached1 and Memcached2 use a public image from Docker Hub
  • "Recycle data 1" and "Recycle data 2" use a custom image from a private repository
  • "Client 1" and "Client 2" use a custom image from a private repository

So in the end, for our example application, we have 6 containers running across 6 different servers. 2 containers run memcached, and the other 4 are clients communicating with memcached. "Client 1" and "Client 2" insert data into memcached based on some rules. "Recycle data 1" and "Recycle data 2" update or delete data from memcached based on some rules. Simple as that.
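For concreteness, a stack like this could be described in a single version "3" Compose file roughly as follows (the hostnames, image names, and private registry below are illustrative assumptions, not taken from the question):

    version: "3.4"
    services:
      memcached1:
        image: memcached:1.6                         # public image from Docker Hub
        deploy:
          replicas: 1
          placement:
            constraints: [node.hostname == server1]  # assumed hostname: pin one task per server
      client1:
        image: registry.example.com/client:1.0       # custom image from an assumed private registry
        deploy:
          replicas: 1
          placement:
            constraints: [node.hostname == server3]
      recycle1:
        image: registry.example.com/recycle:1.0      # custom image from an assumed private registry
        deploy:
          replicas: 1
          placement:
            constraints: [node.hostname == server5]
      # memcached2, client2 and recycle2 would follow the same pattern on the remaining servers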

The applications that communicate with memcached are custom ones, written by us. Their code resides on GitHub (or any other repository). What is the best way to deploy this application to production:

  1. Build images that contain the code copied into them, and use those images to deploy to the swarm, or
  2. Build an image that uses a volume, with the code residing outside the image (both options are sketched below).
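To make the two options concrete: option 1 roughly corresponds to a Dockerfile that copies the code in at build time, while option 2 mounts it at run time. A minimal sketch, where the base image, paths, and entry point are illustrative assumptions:

    # Option 1: code baked into the image at build time (Dockerfile)
    FROM python:3.11-slim          # assumed runtime; use whatever your app needs
    WORKDIR /app
    COPY . /app                    # the code becomes part of the image
    CMD ["python", "client.py"]    # hypothetical entry point

Option 2 would instead mount the code at run time, e.g. in the Compose file:

    services:
      client1:
        image: python:3.11-slim
        volumes:
          - /srv/app-code:/app     # assumed host path; must exist on every node that may run this task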

Since I am deploying a swarm to production for the first time, I can see a lot of issues with option 1. Baking the code into the image seems illogical to me, given that 99% of the time the updates are going to be code changes. That would require building a new image every time you want to update the code running in a specific container (no matter how small the change is).

Option 2 seems much more logical to me, but at this moment I am not sure whether it is even possible. So there are a number of questions here:

  1. What is the best approach when multiple containers are going to run the same code in the background?
  2. Is it possible, in Docker Swarm, to have one central host/server (a manager, or anywhere) where we clone our repositories and share them as volumes across the swarm? (In our example, all 4 custom services would mount the volume where the code is hosted; a sketch follows below.)
  3. If this is possible, what would the docker-compose.yml implementation look like?
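On question 2: Docker does not replicate a local volume across hosts by itself, so the closest mechanically working setup is a named volume backed by shared storage such as NFS, which every node mounts independently. A sketch under that assumption (the NFS server address and export path are made up), though the answer below explains why this approach is discouraged:

    version: "3.4"
    volumes:
      app-code:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=10.0.0.100,ro"        # assumed NFS server reachable from all nodes
          device: ":/export/app-code"    # assumed export holding the cloned repository
    services:
      client1:
        image: python:3.11-slim
        volumes:
          - app-code:/app                # same shared code on every node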
asked Oct 04 '17 15:10 by cool


People also ask

What is used to deploy stacks on Docker Swarm?

When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file. The docker stack deploy command supports any Compose file of version “3.0” or above.
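In practice that amounts to a couple of commands (the stack name "myapp" is arbitrary):

    docker swarm init                                 # once, on the first manager node
    docker stack deploy -c docker-compose.yml myapp   # create or update the stack
    docker stack services myapp                       # inspect the stack's services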

Which two types of nodes can you deploy in a Docker Swarm?

A swarm consists of one or more nodes: physical or virtual machines running Docker Engine 1.12 or later in swarm mode. There are two types of nodes: managers and workers.
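For example, a worker joins by running the join command that a manager prints out:

    docker swarm join-token worker   # on a manager: prints the 'docker swarm join ...' command
    docker node ls                   # on a manager: lists all nodes and their roles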

Is Docker swarm being deprecated?

Docker Swarm is not being deprecated, and is still a viable method for Docker multi-host orchestration, but Docker Swarm Mode (which uses the Swarmkit libraries under the hood) is the recommended way to begin a new Docker project where orchestration over multiple hosts is required.

What is overlay network in Docker Swarm?

The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely when encryption is enabled.
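For example, an encrypted, attachable overlay network (the name is arbitrary) can be created with:

    docker network create --driver overlay --opt encrypted --attachable app-net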


1 Answer

After digging deeper and working with Docker and Docker Swarm mode for the last 3 months, here are the answers to the questions above:


Answer 1: In general, you should consider your Docker image as the "compiled" version of your program. The image should contain either the code base or the compiled program (depending on the programming language you are using), and that specific image represents one version of your app. Every time you want to deploy the next version, you generate a new image.
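Concretely, each release becomes a freshly built, tagged, and pushed image (the registry, name, and tag below are illustrative):

    docker build -t registry.example.com/client:1.4.0 .   # bake this version's code into the image
    docker push registry.example.com/client:1.4.0         # make it available to all swarm nodes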

This is probably the best approach for 99% of the apps hosted with Docker (the exceptions being development environments and apps where you really want to shell in and control things directly from within the container itself).


Answer 2: It is possible, but it is an extremely bad approach. As mentioned in answer 1, the best approach is to copy the app code directly into the image and "consider" your image (running container) as the app itself.

I was not able to wrap my head around this concept at the beginning, because it does not allow you to simply go to the server (or wherever you are hosting your containers), change the app, and restart the container (obviously, after a restart the container starts from scratch again, using the same image and the same code base you deployed with that image). Any kind of change SHOULD and NEEDS to be deployed as a different image with a different version. That is what Docker is all about.
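Rolling out a change therefore means pointing the service at a new image tag, rather than editing files in a running container (names and tags illustrative):

    docker service update --image registry.example.com/client:1.4.1 myapp_client1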

Additionally, the initial idea of sharing the same code base across multiple swarm services is possible, but it totally ruins the purpose of versioning across the swarm.

Consider having 3 services used as redundant services (failover), where you want to run a new version on one of them as a beta test. This is not possible with a shared code base.
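With one image per version, by contrast, this kind of canary test is straightforward, e.g. in the Compose file (tags illustrative):

    services:
      client1:
        image: registry.example.com/client:1.4.0        # stable
      client2:
        image: registry.example.com/client:1.4.0        # stable
      client-beta:
        image: registry.example.com/client:1.5.0-beta   # beta version on one service only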

answered Nov 14 '22 23:11 by cool