I know Docker only watches PID 1, and if that process exits (or turns itself into a daemon), Docker assumes the program has finished and shuts the container down.

When Apache Spark is started via the ./start-master.sh script, how can I keep the container running?

I do not think while true; do sleep 1000; done is an appropriate solution.

For example, I used the command sbin/start-master.sh to start the master, but the container keeps shutting down.

How do I keep it running when started with docker-compose?
As mentioned in "Use of Supervisor in docker", you could use phusion/baseimage-docker as a base image in which you can register scripts as "services".
The my_init script included in that image runs as PID 1, handles exit signals, and reaps child processes, so the processes launched by start-master.sh would keep running.
Again, that supposes you are building your apache-spark image starting from phusion/baseimage-docker.
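To make that concrete, here is a minimal sketch of such a Dockerfile; the /spark install path, the image tag, and the spark-master-run.sh file name are assumptions for illustration, not part of the original answer:

```dockerfile
# Sketch only: the Spark install steps are omitted and /spark is an assumed path
FROM phusion/baseimage:0.11

# ...install Spark under /spark here...

# Register the master as a runit "service"; my_init (PID 1) supervises it,
# restarts it if it dies, and forwards signals on "docker stop"
RUN mkdir -p /etc/service/spark-master
COPY spark-master-run.sh /etc/service/spark-master/run
RUN chmod +x /etc/service/spark-master/run

# phusion's init stays alive as PID 1, so the container keeps running
CMD ["/sbin/my_init"]
```

where spark-master-run.sh keeps the master in the foreground, as runit requires:

```sh
#!/bin/sh
# runit services must not daemonize, so launch the master directly
# via spark-class instead of the backgrounding start-master.sh
exec /spark/bin/spark-class org.apache.spark.deploy.master.Master
```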
As commented by thaJeztah, using an existing image works too: gettyimages/spark/~/dockerfile/. Its default CMD will keep the container running.
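I have not verified that image's exact Dockerfile, but the usual pattern behind such a CMD is to run the master in the foreground instead of through the daemonizing start-master.sh, along these lines (with /spark as an assumed install path):

```dockerfile
# Illustrative pattern only: spark-class runs the Master class in the
# foreground, so PID 1 never exits on its own
CMD ["/spark/bin/spark-class", "org.apache.spark.deploy.master.Master"]
```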
Both options are cleaner than relying on a tail -f trick, which won't handle the kill/exit signals gracefully.
Here is another solution: create a file spark-env.sh with the following contents and copy it into the Spark conf directory.
SPARK_NO_DAEMONIZE=true
If your CMD in the Dockerfile looks like this:
CMD ["/spark/sbin/start-master.sh"]
the container will not exit: with SPARK_NO_DAEMONIZE set, start-master.sh runs the master in the foreground instead of forking it into the background and returning, so PID 1 stays alive.
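Putting it together, a minimal sketch; the base image, its tag, and the /spark install path are assumptions for illustration:

```dockerfile
FROM eclipse-temurin:11-jre

# ...steps that download and unpack Spark under /spark go here...

# spark-env.sh contains SPARK_NO_DAEMONIZE=true (see above)
COPY spark-env.sh /spark/conf/spark-env.sh

# with SPARK_NO_DAEMONIZE set, this keeps running in the foreground
CMD ["/spark/sbin/start-master.sh"]
```

And since the question asks about docker-compose, a matching hypothetical docker-compose.yml could look like:

```yaml
services:
  spark-master:
    build: .
    ports:
      - "8080:8080"   # master web UI (Spark default port)
      - "7077:7077"   # master RPC port (Spark default port)
```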