Is it wrong to run a single process in docker without providing basic system services?

Tags:

docker

After reading the introduction to phusion/baseimage, I feel that creating containers from the Ubuntu image (or any other official distro image) and running a single application process inside them is wrong.

The main reasons, in short (see the sketch after the list):

  • No proper init process (that handles zombie and orphaned processes)
  • No syslog service
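
For context, phusion/baseimage addresses both points by shipping a small init process (my_init) together with runit and a syslog daemon. A minimal sketch of how it is meant to be used, based on its documentation (the mysqld run script is a hypothetical example):

    FROM phusion/baseimage
    # my_init runs as PID 1: it reaps zombie processes and starts
    # the runit-managed services, including syslog
    CMD ["/sbin/my_init"]
    # a long-running service is added as a runit "run" script, e.g.:
    # COPY mysqld.sh /etc/service/mysqld/run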

Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as its only process and provides no logging facility beyond the messages mysqld writes to STDOUT and STDERR, which are accessible via docker logs.
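
For instance, the only built-in way to see the server's output with the official MySQL image is roughly this (container name and password are placeholders):

    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
    docker logs db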

This raises the question of what the appropriate way to run a service inside a Docker container is. Is it wrong to run only a single application process inside a Docker container without providing basic Linux system services like syslog? Does it depend on the type of service running inside the container?

asked Sep 29 '14 by CodeZombie


2 Answers

Check this discussion for a good read on the issue. Basically, the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro-servers as possible, and that there may be many such servers on a single 'real' server. If a process fails, you should launch a new container rather than try to set up an init system inside the container. So if you are looking for the canonical best practice, the answer is yes: no basic Linux services. It also makes sense when you think of many containers running on a single node: do you really want them all to run their own copies of these services?
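
As a concrete illustration of the 'launch a new container' approach, Docker's restart policies (added in Docker 1.2) let the daemon relaunch a failed container for you, so no in-container init is needed to respawn the process; the image name here is just an example:

    docker run -d --restart=always mysql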

That being said, the state of logging in Docker is famously broken; even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and run a logrotate daemon etc. on the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
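
A minimal sketch of that logging setup, with hypothetical paths: the container writes its logs to a bind-mounted host directory, and an ordinary logrotate rule on the host rotates them:

    # run the container with its log directory mounted from the host
    docker run -d -v /var/log/myapp:/var/log/myapp myapp

    # /etc/logrotate.d/myapp on the host
    /var/log/myapp/*.log {
        daily
        rotate 7
        compress
        missingok
    }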

Edit: With Docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
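
For example, to get a shell in an already-running container (the container name is a placeholder):

    docker exec -it mycontainer /bin/bash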

answered Oct 22 '22 by Usman Ismail


It depends on the type of service you are running.

Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of or requires multiple services/processes, those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to use one application.
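
One common way to do that (not something the Docker project prescribes) is to run a small process supervisor such as supervisord as the container's main process; the program names below are hypothetical:

    ; supervisord.conf
    [supervisord]
    nodaemon=true

    [program:web]
    command=/usr/bin/myapp

    [program:worker]
    command=/usr/bin/myworker

The container then starts with something like supervisord -c /etc/supervisor/supervisord.conf, and supervisord keeps both processes running.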

As a side note, breaking your application up into multiple images also makes it more prone to configuration drift.

I can see why you would want to limit a Docker container to one process. One reason is startup time: when building a Docker provisioning system, it's essential to keep a container's startup time to a minimum so that scaling out is fast. This means that if I can get away with running a single process per Docker container, I should go for it. But that's not always possible.

To answer your question directly: no, it's not wrong to run a single process in Docker.

HTH

answered Oct 22 '22 by rexposadas