I recently researched some Docker best practices and came across differing opinions on whether and how to handle the init process.
As pointed out here, the init process should not be run at all. I can follow the reasoning that a container should model a single process, not a whole OS.
On the other hand, as described here, there can be problems if I simply ignore basic OS services like syslog.
As is often the case, there may be no absolute answer here. Can you share some experience or further insight on this topic? To me, both approaches seem legitimate.
Spot on. There is no absolute answer to this question.
Now, having said that, I think that there are substantial advantages to the single-process-per-container model, because that really encourages you to create containers that are composable (like lego blocks: you can put them together in different combinations to solve a problem) and that are scalable (you can spin up more instances of a particular service without too much effort). By not doing crazy things like running an ssh daemon inside your container, you are discouraged from editing things "in place" and will -- hopefully -- be more likely to rely on Dockerfiles to generate your images, which leads to a much more robust, reproducible process.
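To make the composability point concrete, here is a minimal sketch of what a single-process image looks like when it is built from a Dockerfile; the base image and application file are placeholders of my own, not something from the post:

```dockerfile
# Hypothetical single-process image: the container runs exactly one service.
FROM python:3.12-slim
COPY app.py /app/app.py
# The application itself is PID 1; no ssh daemon, no init system, no extras.
# Rebuilding from this Dockerfile reproduces the image instead of editing it in place.
CMD ["python", "/app/app.py"]
```

Because the image is fully described by the Dockerfile, scaling out is just running more instances of it.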
On the other hand, there are some applications that don't lend themselves well to this model. For example, if you have an application that forks lots of child processes and doesn't properly wait() for them, you end up with a collection of zombie processes. You can run a full-blown init process to solve this particular problem, or you can run something simple like this (disclaimer: I wrote that) or this.
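As one concrete option for the zombie-reaping case (not necessarily the tools those links point to): a tiny init such as tini can run as PID 1, and newer Docker versions can inject one for you with docker run --init. A sketch, where the package path and application name are my assumptions:

```dockerfile
FROM ubuntu:22.04
# tini is a tiny init: it forwards signals and reaps orphaned child processes.
RUN apt-get update && apt-get install -y tini
# Run tini as PID 1; everything after "--" is the real command it supervises.
ENTRYPOINT ["/usr/bin/tini", "--"]
# Hypothetical application that forks children without wait()ing for them.
CMD ["/usr/local/bin/my-forking-app"]
```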
Some applications are just really tightly coupled, and while it's possible to run them in separate containers through liberal application of Docker volumes and --net=container:..., it's easier just to let them run in the same container.
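For illustration, this is roughly what the --net=container:... pattern looks like; the container and image names are made up. The second container joins the first one's network namespace, so the two processes can talk over localhost as if they shared a machine:

```sh
# Start the primary application container (names/images are hypothetical).
docker run -d --name app my-app-image

# Attach a helper to the same network namespace: both now share localhost,
# so the tightly coupled processes can communicate without extra wiring.
docker run -d --name helper --net=container:app my-helper-image
```

Volumes can be shared in the same spirit with --volumes-from, which is what makes the separate-containers route possible at all, if somewhat fiddly.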
Logging in Docker is particularly challenging. Running some sort of log collector inside a container along with your application can be one solution to that problem, but there are other solutions, too. Logspout is an interesting one, but I have also been looking at running systemd inside containers in order to make use of journald for logging. So, while I am still running one application process per container, I also have an init process and a journald process.
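For completeness, this is roughly what the Logspout approach looks like in practice; the syslog endpoint below is a placeholder, and the exact route syntax may have changed since this was written:

```sh
# Logspout attaches to the Docker socket and forwards every container's
# stdout/stderr to a remote collector; no log agent inside your app containers.
docker run -d --name logspout \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout \
    syslog+tcp://logs.example.com:514
```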
So, ultimately, it really depends on the situation: both on your needs and the needs of the particular application you are trying to run. Even in situations where a single process per container isn't possible, designing containers to offer a single service still confers many of the advantages I mentioned in the first paragraph.