Here's my situation:
Now I want to change some code and redeploy my changes to those 3 servers. I can think of 3 possible ways to automate this:
Number 1 seems like the easiest, but most other discussion I've read about Docker leans towards something like number 3, which seems rather long-winded to me.
What is the best option here (or is there a better one I haven't listed)? I'm new to Docker, so have I missed something? I asked someone who knows Docker and their response was "you're not thinking in the Docker way", so what is the Docker way?
It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
The docker-compose.yml file allows you to configure and document all your application's service dependencies (other services, cache, databases, queues, etc.). Using the docker-compose CLI command, you can create and start one or more containers for each dependency with a single command (docker-compose up).
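As a minimal sketch, a docker-compose.yml for an app with a cache dependency might look like the following (the service names, build context, port mapping and Redis image are assumptions for illustration, not taken from the question):

```yaml
# Hypothetical example: one application service plus a Redis cache.
version: "3.8"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8000:8000"     # expose the app on the host
    depends_on:
      - redis           # start the cache before the app
  redis:
    image: redis:7      # use the official Redis image
```

Running docker-compose up -d then builds the image (if needed) and starts both containers together.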
You can create multiple networks with Docker and add containers to one or more networks. Containers can communicate within networks but not across networks. A container with attachments to multiple networks can connect with all of the containers on all of those networks.
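For instance, a compose file can declare user-defined networks and attach each service only to the networks it needs; the service and network names below are assumptions for illustration:

```yaml
# Hypothetical example of isolating services with two user-defined networks.
version: "3.8"
services:
  proxy:
    image: nginx:alpine
    networks:
      - frontend          # the proxy can only reach the app
  app:
    build: .
    networks:
      - frontend          # reachable by the proxy
      - backend           # can also reach the database
  db:
    image: postgres:15
    networks:
      - backend           # isolated from the proxy
networks:
  frontend:
  backend:
```

Here the proxy and the db cannot talk to each other directly, while the app, being attached to both networks, can reach both.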
I think the idea behind option 3 is that you build the image only once, which means all servers run the same image. The other two approaches may produce a different image on each server.
E.g. in a slightly more involved scenario, the three builds could even pick up different commits if you go with option 1.
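A rough sketch of that build-once workflow, assuming you have a registry available (the registry host, image name and tag below are placeholders):

```bash
# Hypothetical build-once workflow; registry host, image name and tag are placeholders.
# Build and tag the image once, on a build machine or CI server, then push it.
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3

# On each of the three servers: pull the identical image and restart the container.
docker pull registry.example.com/myapp:1.2.3
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:1.2.3
```

Because every server pulls the same tagged image, there is no chance of the deployments drifting apart the way per-server builds can.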