I have a container with one Node.js script which is launched with CMD npm start. The script runs, does some work, and exits: the node process exits because no work is pending, npm start exits successfully, and the container then stops.
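For reference, a minimal Dockerfile for this kind of setup might look like the following (a sketch, not my actual image):

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]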
I run this container on a Synology NAS from a cronjob via docker start xxxx. When it finishes, I get an alert "Docker container xxxx stopped unexpectedly" from their alert system. docker container ls -a shows its status as Exited (0) 5 hours ago. If I monitor docker events I see the event die with exitCode=0.
It seems like I need to signal to the system that the exit is expected by producing a stop event instead of a die event. Is that something I can do in my image or on the docker start command line?
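For reference, the event stream can be narrowed to just this container and event type (container= and event= are standard docker events filters):

docker events --filter container=xxxx --filter event=die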
The Synology Docker package will generate the notification Docker container xxxx stopped unexpectedly when the following two conditions are met:
1. The container emits a die docker event (you can see this happen by monitoring docker events when the container exits). This is any case where the main process in the container exits on its own. The exitCode does not matter.
2. The container is marked "enabled" by the Docker package. This state is stored in /var/packages/Docker/etc/container_name.config:
{
   "enabled" : true,
   "exporting" : false,
   "id" : "dbee87466fb70ea26cd9845fd79af16d793dc64d9453e4eba43430594ab4fa9b",
   "image" : "busybox",
   "is_ddsm" : false,
   "is_package" : false,
   "name" : "musing_cori",
   "shortcut" : {
      "enable_shortcut" : false,
      "enable_status_page" : false,
      "enable_web_page" : false,
      "web_page_url" : ""
   }
}
Containers are automatically enabled if you start them from the GUI; starting a container that way marks it "enabled" and makes it notify on exit. This is probably how your container ended up "enabled" and why it is now notifying whenever it exits. Containers created with docker run -d ... do not start out enabled, and will not initially warn on exit. This is probably why things like docker run -it --rm busybox and other ephemeral containers do not cause notifications.
Containers can be disabled if you stop them while they are running. There appears to be no way to disable a container which is currently stopped. So to disable a container you must start it and then stop it before it exits on its own:
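As a sketch, using the container name xxxx from the question (this only works if the script keeps the container running long enough for the stop to land while it is still up):

docker start xxxx && docker stop xxxx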
Check your work by looking at /var/packages/Docker/etc/container_name.config.
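For example, assuming the container is named xxxx, so that its config file follows the pattern above (the exact filename is an inference from that pattern):

grep '"enabled"' /var/packages/Docker/etc/xxxx.config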