 

Docker on several computers

Tags:

docker

cloud

For a study, I deployed a cloud architecture on my computer using Docker (Nginx for load balancing and some Apache servers running a simple PHP application).

I wanted to know if it was possible to use several computers to deploy my containers in order to increase the power available.

(I'm using a MacBook Pro with Yosemite. I've installed boot2docker with VirtualBox.)

Tom Giraudet asked Mar 11 '26 21:03


1 Answer

Disclosure: I was a maintainer on Swarm Legacy and Swarm mode

Edit: This answer mentions Docker Swarm Legacy, the first version of Docker Swarm. Since then, a new version called Swarm mode has been included directly in the Docker engine; it behaves a bit differently in terms of topology and features, even though the big ideas remain the same.
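For reference, a Swarm mode cluster on a modern Docker engine is bootstrapped with a couple of commands. This is only a sketch; `<manager_ip>` and `<worker_token>` are placeholders you substitute with your own values:

```shell
# On the manager node: initialize a new swarm,
# advertising an address reachable by the other nodes
docker swarm init --advertise-addr <manager_ip>

# On each worker node: join the swarm using the token
# printed by `docker swarm init`
docker swarm join --token <worker_token> <manager_ip>:2377

# Back on the manager: deploy a service replicated across the cluster
docker service create --name web --replicas 3 nginx
```

Once the service is created, the built-in scheduler spreads the replicas over the available nodes.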

Yes, you can deploy Docker on multiple machines and manage them together as a single pool of resources. There are several solutions for orchestrating your containers across multiple machines with Docker.

You can use Docker Swarm, Kubernetes, Mesos/Marathon, or Fleet (there might be others, as this is a fast-moving area). There are also commercial solutions such as Amazon ECS.

In the case of Swarm, it uses the Docker remote API to communicate with distant Docker daemons and schedule containers according to the load or some extra constraints (other systems are similar, with more or fewer features). Here is an example of a small Swarm deployment.

                                Docker CLI
                                    +   
                                    |     
                                    |        
                                    | 4000 (or else)    
                                    | server
                           +--------v---------+   
                           |                  |          
              +------------>   Swarm Manager  <------------+     
              |            |                  |            |    
              |            +--------^---------+            |  
              |                     |                      |     
              |                     |                      |  
              |                     |                      |     
              |                     |                      |  
              | client              | client               | client  
              | 2376                | 2376                 | 2376   
              |                     |                      |      
    +---------v-------+    +--------v--------+    +--------v--------+     
    |                 |    |                 |    |                 |    
    |   Swarm Agent   |    |   Swarm Agent   |    |   Swarm Agent   |    
    |     Docker      |    |     Docker      |    |     Docker      |       
    |     Daemon      |    |     Daemon      |    |     Daemon      |  
    |                 |    |                 |    |                 |          
    +-----------------+    +-----------------+    +-----------------+
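The topology above can be sketched with the legacy Swarm image and token-based hosted discovery. This is a minimal, hypothetical setup; `<cluster_id>`, `<node_ip>`, and `<manager_ip>` are placeholders:

```shell
# Create a cluster ID using the hosted discovery service
docker run --rm swarm create
# prints a <cluster_id> token to use below

# On each node: run the Swarm agent, advertising the local
# Docker daemon's address and port to the cluster
docker run -d swarm join --advertise=<node_ip>:2376 token://<cluster_id>

# On the manager host: run the Swarm manager, listening on port 4000
docker run -d -p 4000:4000 swarm manage -H :4000 token://<cluster_id>

# Point the regular Docker CLI at the manager; containers are
# then scheduled across the whole pool of nodes
docker -H tcp://<manager_ip>:4000 run -d nginx
```

The key idea is that the manager speaks the same remote API as a single Docker daemon, so the standard CLI works against the whole cluster unchanged.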

Choosing one of those systems is basically a choice between:

  • Cluster deployment simplicity and maintenance
  • Flexibility of the scheduler
  • Completeness of the API
  • Support for running VMs
  • Higher abstraction for groups of containers: Pods
  • Networking model (Bridge/Host or Overlay or Flat network)
  • Compatibility with the Docker remote API

It depends mostly on the use case and what kind of workload you are running. For more details on the differences between those systems, see this answer.

abronan answered Mar 14 '26 13:03


