 

How to keep Docker container running after starting services?

Tags:

docker



If you are using a Dockerfile, try:

ENTRYPOINT ["tail", "-f", "/dev/null"]

(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
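
For completeness, a minimal sketch of how this might look end to end; the image name, tag, and container name are placeholders, not part of the original answer:

FROM ubuntu:22.04
# keep PID 1 alive forever without consuming CPU
ENTRYPOINT ["tail", "-f", "/dev/null"]

docker build -t keepalive-demo .
docker run -d --name keepalive keepalive-demo
docker exec -it keepalive bash   # open a shell in the running container whenever needed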


I just had the same problem and found out that if you run your container with the -t and -d flags, it keeps running.

docker run -td <image>

Here is what the flags do (according to docker run --help):

-d, --detach=false         Run container in background and print container ID
-t, --tty=false            Allocate a pseudo-TTY

The most important one is the -t flag. -d just lets you run the container in the background.
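
To verify, assuming any stock image (ubuntu is just an example):

docker run -td ubuntu
docker ps   # the container shows as Up; its shell stays alive because the pseudo-TTY keeps stdin open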


This is not really how you should design your Docker containers.

When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.

The container will "exit" when the process itself exits (in your case, that process is your bash script).


However, if you really need (or want) to run multiple services in your Docker container, consider starting from a base image built for that, such as the phusion base image, which uses runit as a pseudo-init process; runit stays in the foreground while Nginx, Supervisor, and your other processes do their thing.

They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
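
A minimal sketch of that approach, assuming the phusion base image and a hypothetical nginx.sh run script; runit starts any executable named run under /etc/service/<name>/:

FROM phusion/baseimage:<tag>

# register nginx as a runit-managed service
RUN mkdir -p /etc/service/nginx
COPY nginx.sh /etc/service/nginx/run
RUN chmod +x /etc/service/nginx/run

# my_init is the image's init process; it stays in the foreground as PID 1
CMD ["/sbin/my_init"]

where nginx.sh keeps nginx in the foreground:

#!/bin/sh
exec nginx -g "daemon off;"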


The reason it exits is that the shell script runs as PID 1; when the script completes, PID 1 is gone, and Docker keeps the container running only as long as PID 1 is alive.

You can use supervisor to do everything; when run with the -n flag it is told not to daemonize, so it stays as the first process:

CMD ["/usr/bin/supervisord", "-n"]

And your supervisord.conf:

[supervisord]
nodaemon=true

[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0

[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true

Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.

That way you could use supervisord in cases where you might need nginx and php5-fpm and it doesn't make much sense to have them apart.
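
For reference, a sketch of the Dockerfile that ties this together, assuming Debian/Ubuntu package names; the script path matches the [program:startup] section above:

FROM ubuntu:22.04

RUN apt-get update && apt-get install -y supervisor nginx \
 && mkdir -p /var/log/supervisor

# /etc/supervisor/supervisord.conf is on supervisord's default search path
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY credentialize_and_run.sh /root/credentialize_and_run.sh
RUN chmod +x /root/credentialize_and_run.sh

# -n keeps supervisord in the foreground as PID 1
CMD ["/usr/bin/supervisord", "-n"]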


You can run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container alive; it does nothing but wait for input (Jenkins' Docker plugin does the same thing).
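
A quick sketch; the image name is a placeholder, and the -t flag matters because it gives cat a TTY to block on:

docker run -dt alpine cat
docker ps   # stays Up: cat blocks forever waiting for terminal input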


Motivation:

There is nothing wrong with running multiple processes inside a Docker container. If one likes to use Docker as a lightweight VM, so be it. Others like to split their applications into microservices. Methinks: a LAMP stack in one container? Just great.

The answer:

Stick with a good base image like the phusion base image. There may be others. Please comment.

And this is just yet another plea for a process supervisor: the phusion base image provides runit, besides some other things like cron and locale setup, stuff you'd like to have set up when running such a lightweight VM. For what it's worth, it also provides SSH connections into the container.

The phusion image itself will just start and keep running if you issue this basic docker run statement:

moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS
521e8a12f6ff        phusion/baseimage   "/sbin/my_init"     12 seconds ago      Up 11 seconds

Or dead simple:

If a base image is not for you... For a quick CMD to keep it running, I would suggest something like this for bash:

CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"

Or this for busybox:

CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"

This is nice, because it will exit immediately on a docker stop.

Plain sleep or cat will take about ten seconds (Docker's default stop grace period) before the container is forcefully killed by Docker.
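
To see the difference, assuming two containers started with the CMDs above (the names are placeholders):

docker stop trapped       # returns almost immediately: the trap reacts to SIGTERM
docker stop plain-sleep   # waits ~10s (Docker's default grace period), then SIGKILL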

Updates

As response to Charles Desbiens concerning running multiple processes in one container:

This is an opinion, and the docs point in this direction. A quote: "It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." For sure, it is obviously much more powerful to divide your complex service into multiple containers. But there are situations where going the one-container route can be beneficial, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container: it makes deployment of this complex system easy, there is no room for misconfiguration, and GitLab retains all control over their appliance. Win-win.