 

How to deal with Docker container dependencies properly?

I have just started to learn about Docker and am considering replacing my VM-based infrastructure with a Docker-based one. I am wondering how to deal with dependencies between containers, how to decide when/if a restart of a dependent container is necessary, and, if so, how to minimize downtime.

To be more precise: I have discovered tools such as fig or decking that manage containers and their dependencies, so (if I am lucky) I get a directed acyclic graph that tells me in which order to start up or take down containers. For example, the mongodb container must start before the webserver container, etc.
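
For illustration, a minimal fig.yml sketch of that ordering might look like this (the build path, image, and port are made up for the example):

    web:
      build: ./webapp            # hypothetical Jetty webapp image
      links:
        - mongodb                # fig starts mongodb before web
      ports:
        - "8080:8080"
    mongodb:
      image: mongo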

So if I update MongoDB or change some setting, I guess I should shut down the webserver as well, because it can't deal very well with the database not being present. In this case, how can I minimize the downtime incurred by shutting down and restarting the containers, including redeployment of the Jetty webapp etc.?

But then, if I just update my SMTP server (which more or less all other containers depend on), I do not want this to trigger a restart of my whole container infrastructure. So, after a restart of the mailserver container, will the other containers still be able to reach the previously linked ports?

How do you deal with this situation? Do I need to (and is it even possible to) add an ambassador container ABC_amb for every container ABC, one that never goes down and holds connections while ABC is restarting?

asked Jul 30 '14 by tgpfeiffer


1 Answer

So I think what I will do is, first, split the dependencies between containers into "hard" and "soft" ones.

"Hard dependency" means that B depends so much on A that if A is restarted, B must be restarted as well. (Maybe because there is a network connection that depends on B's state at boot time.) In that case, I will restart containers in a dependency-respecting way: Shut down B, then A, then start A, and finally B. That is what fig and decking can do very well.

"Soft dependency" means that B uses services from A, but not so much that a restart of B is required if A restarts. (Typical use case is a web proxy on B for a web app on A.) In that case, I will only restart A and keep B running.

For soft dependencies, I cannot use Docker's --link parameter, though, because after a restart of A, the hostname for A that B received at link time would point nowhere (container IP addresses change on restart). Therefore, I will use Serf to register A after boot and unregister it before shutdown, and a Serf event handler on B to trigger the configuration change, i.e., update A's IP address in B's configuration files and reload the affected services. (This blog post gives an introduction to how this works, but beware that their setting is different from mine.)
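
A sketch of such an event handler on B's side, relying on Serf's standard handler interface (the event type in $SERF_EVENT, one "name address role tags" line per member on stdin); the role name, config file, and reload command are assumptions:

    #!/bin/sh
    # registered via: serf agent -event-handler=member-join=/etc/serf/update-a.sh
    while read name addr role tags; do
      if [ "$SERF_EVENT" = "member-join" ] && [ "$role" = "webapp-a" ]; then
        # point B's configuration at A's new IP address
        sed -i "s/^upstream_host=.*/upstream_host=${addr}/" /etc/myservice/backend.conf
        service myservice reload    # pick up the new address without a restart
      fi
    done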

However, to avoid having to do this on every single host, I will use a Serf-enabled HAProxy server that functions as an ambassador between A and B. B is linked to this proxy using --link, so the software running on B does not need to know anything about Serf; it can simply rely on the link's hostname to connect to the ambassador, which in turn proxies the connection to A.
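
A hedged sketch of that wiring with plain docker commands (image and container names are made up):

    # A: the webapp itself; no static link, it announces itself via Serf
    docker run -d --name webapp_a my/webapp
    # ambassador: Serf agent plus HAProxy, tracks A's current IP via Serf events
    docker run -d --name webapp_amb my/serf-haproxy
    # B: hard-linked to the ambassador only; hostname "webapp" resolves to it
    docker run -d --name nginx_b --link webapp_amb:webapp my/nginx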

  A (webapp)   <--[soft]--  A_ambassador (haproxy)  <--[hard]--  B (nginx)

That seems like a feasible approach to keep a container running while one of its (soft) dependencies restarts. A nice side effect is that, if the event handler scripts are written well, HAProxy can act as an actual load balancer when multiple instances of A exist.
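
For reference, a minimal haproxy.cfg sketch for the ambassador; the Serf event handler would rewrite the server lines and reload HAProxy (all names, addresses, and ports are assumptions):

    listen webapp
        bind *:8080
        mode tcp
        balance roundrobin
        # one line per running instance of A, maintained by the event handler
        server a1 172.17.0.5:8080 check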

Open issues:

  • How can HAProxy hold incoming connections while the proxied service is down?
  • In some cases, B will also have to restart (say, the password needed to connect to A changed). Or A_ambassador and B both have to restart (say, the port used by A changed). How can these cases be detected and dealt with appropriately?
  • Is the overhead of adding one additional HAProxy instance per service negligible? Is there a more lightweight solution available?
answered Sep 21 '22 by tgpfeiffer