I have a project using the official nginx Docker container from Docker Hub, launched via Docker Compose. I have healthchecks configured in Docker Compose for each of my containers, and recently the healthcheck for this nginx container has been behaving strangely: on launching with docker-compose up -d, all my containers launch and begin running healthchecks, but the nginx container looks like it never runs its healthcheck. I can run the script just fine if I docker exec into the container, and the healthcheck runs normally if I restart the container.

Example output from docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
458a55ae8971 my_custom_image "/tini -- /usr/local…" 7 minutes ago Up 7 minutes (healthy) project_worker_1
5024781b1a73 redis:3.2 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes (healthy) 127.0.0.1:6379->6379/tcp project_redis_1
bd405dde8ce7 postgres:9.6 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes (healthy) 127.0.0.1:15432->5432/tcp project_postgres_1
93e15c18d879 nginx:mainline "nginx -g 'daemon of…" 7 minutes ago Up 7 minutes (health: starting) 127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp nginx
Example (partial, for brevity) output from docker inspect nginx:
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 11568,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-02-13T21:04:22.904241169Z",
"FinishedAt": "0001-01-01T00:00:00Z",
"Health": {
"Status": "unhealthy",
"FailingStreak": 0,
"Log": []
}
},
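As an aside, a Go template pulls just the health state out of docker inspect, which is handy for watching it change:

docker inspect --format '{{json .State.Health}}' nginx
docker inspect --format '{{.State.Health.Status}}' nginx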
The portion of the docker-compose.yml defining the nginx container:
nginx:
  image: nginx:mainline
  # using container_name means there will only ever be one nginx container!
  container_name: nginx
  restart: always
  networks:
    - proxynet
  volumes:
    - /etc/nginx/conf.d
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - tlsdata:/etc/nginx/certs:ro
    - attachdata:/usr/share/nginx/html/uploads:ro
    - staticdata:/usr/share/nginx/html/static:ro
    - ./nginx/healthcheck.sh:/bin/healthcheck.sh
  healthcheck:
    test: ['CMD', '/bin/healthcheck.sh']
    interval: 1m
    timeout: 5s
    retries: 3
  ports:
    # Make the http/https ports available on the Docker host IPv4 loopback interface
    - '127.0.0.1:80:80'
    - '127.0.0.1:443:443'
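One related knob: compose file formats 2.3/3.4 and later (with a correspondingly recent Docker engine) accept start_period under healthcheck, giving the container warm-up time during which failed checks don't count toward retries; a sketch of the same block with it added:

healthcheck:
  test: ['CMD', '/bin/healthcheck.sh']
  interval: 1m
  timeout: 5s
  retries: 3
  start_period: 30s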
The healthcheck.sh I am loading in as a volume:
#!/bin/bash
service nginx status || exit 1
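For comparison, a check that probes nginx over HTTP rather than going through the init system avoids depending on service at all; a minimal sketch, assuming curl is available in the image (the stock nginx image may not include it):

#!/bin/bash
# Ask nginx for a response directly; --fail makes curl exit non-zero on HTTP errors.
curl --fail --silent --output /dev/null http://127.0.0.1/ || exit 1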
It looks like the problem is just that service nginx status never returns when the container initially launches, and at the same time the configured healthcheck timeout does not trigger. Everything else works, and nginx is up and responding, but it would be nice for the healthcheck to function properly without needing a manual restart every time I start up.

Is there something missing in my configuration, or a better check I can run?
Update: it turns out that if a SIGHUP is sent to an NGINX process in a container before the first healthcheck runs in that container, no healthchecks ever run. The final iteration I came up with modifies the nginx-gen container to poll the health of the nginx container: it looks up the health status of a container with a defined label in a loop, with a short sleep.
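A minimal sketch of that polling loop (the label name com.example.proxy and the docker restart recovery step are placeholders, not what nginx-gen actually does):

#!/bin/bash
# Watch the health of the container carrying a given label; assumes exactly one match.
while true; do
  cid=$(docker ps --quiet --filter 'label=com.example.proxy')
  status=$(docker inspect --format '{{.State.Health.Status}}' "$cid" 2>/dev/null)
  if [ "$status" = "unhealthy" ]; then
    docker restart "$cid"
  fi
  sleep 5
done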
I think that there is no need for a custom script in this case. Try just changing your healthcheck test to

test: ["CMD", "service", "nginx", "status"]

That works fine for me.
Try to use "
instead of '
as well, just in case :)
EDIT
If you really want to force an exit 1 in case of failure, you could use:

test: service nginx status || exit 1
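Note that when test is given as a plain string like this, Compose runs it with the container's shell, so it is equivalent to the explicit CMD-SHELL form:

test: ["CMD-SHELL", "service nginx status || exit 1"]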