When I run the following command, I expect the exit code to be 0 since my combined
container runs a test that successfully exits with an exit code of 0.
docker-compose up --build --exit-code-from combined
Unfortunately, I consistently receive an exit code of 137 even when the tests in my combined
container run successfully and I exit that container with an exit code of 0 (more details on how that happens are specified below).
Below is my docker-compose version:
docker-compose version 1.25.0, build 0a186604
According to this post, an exit code of 137 can be due to two main issues:

1. The container received a docker stop and the app is not gracefully handling SIGTERM.
2. The container has run out of memory (OOM).

I know the 137 exit code is not because my container has run out of memory. When I run docker inspect <container-id>, I can see that "OOMKilled" is false, as shown in the snippet below. I also have 6GB of memory allocated to the Docker Engine, which is plenty for my application.
[
    {
        "Id": "db4a48c8e4bab69edff479b59d7697362762a8083db2b2088c58945fcb005625",
        "Created": "2019-12-12T01:43:16.9813461Z",
        "Path": "/scripts/init.sh",
        "Args": [],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,    <---- shows container did not run out of memory
            "Dead": false,
            "Pid": 0,
            "ExitCode": 137,
            "Error": "",
            "StartedAt": "2019-12-12T01:44:01.346592Z",
            "FinishedAt": "2019-12-12T01:44:11.5407553Z"
        },
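For reference, those two fields can also be pulled out on their own with docker inspect and a Go-template format string (just a convenience; <container-id> is a placeholder):
# Print only the OOMKilled flag and the exit code of a container
docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' <container-id>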
My container doesn't exit from a docker stop, so I don't think the first reason is relevant to my situation either.
How my Docker containers are set up
I have two Docker containers:
combined - runs my front end and back end servers and then runs my Cypress tests against them
db - a MongoDB container that the back end connects to
I'm using a docker-compose.yml file to start both containers.
version: '3'
services:
  db:
    build:
      context: .
      dockerfile: ./docker/db/Dockerfile
    container_name: b-db
    restart: unless-stopped
    volumes:
      - dbdata:/data/db
    ports:
      - "27017:27017"
    networks:
      - app-network
  combined:
    build:
      context: .
      dockerfile: ./docker/combined/Dockerfile
    container_name: b-combined
    restart: unless-stopped
    env_file: .env
    ports:
      - "5000:5000"
      - "8080:8080"
    networks:
      - app-network
    depends_on:
      - db
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
Below is the Dockerfile for the combined service in docker-compose.yml.
FROM cypress/included:3.4.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
RUN npm install -g history-server nodemon
RUN npm run build-test
EXPOSE 8080
COPY ./docker/combined/init.sh /scripts/init.sh
RUN ["chmod", "+x", "/scripts/init.sh"]
ENTRYPOINT [ "/scripts/init.sh" ]
Below is what is in my init.sh file.
#!/bin/bash
# Start front end server
history-server dist -p 8080 &
front_pid=$!
# Start back end server that interacts with DB
nodemon -L server &
back_pid=$!
# Run tests
NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome
# Error code of the test
test_exit_code=$?
echo "TEST ENDED WITH EXIT CODE OF: $test_exit_code"
# Stop the front end and back end servers
kill -9 $front_pid
kill -9 $back_pid
# Exit with the error code of the test
echo "EXITING SCRIPT WITH EXIT CODE OF: $test_exit_code"
exit "$test_exit_code"
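Note that this script does not trap SIGTERM itself. In case that matters, a variant that forwards the signal to the background servers might look like the following (just a sketch, not what I am currently running):
#!/bin/bash
# Sketch: run the servers and the test run in the background and forward
# SIGTERM/SIGINT to them, so the container can shut down gracefully
# instead of waiting to be SIGKILLed.
cleanup() {
  kill "$front_pid" "$back_pid" "$test_pid" 2>/dev/null
}
trap cleanup SIGTERM SIGINT

history-server dist -p 8080 &
front_pid=$!
nodemon -L server &
back_pid=$!

NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome &
test_pid=$!
wait "$test_pid"   # wait is interruptible, so the trap runs promptly
test_exit_code=$?

kill "$front_pid" "$back_pid" 2>/dev/null
exit "$test_exit_code"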
Below is the Dockerfile for my db service. All it's doing is copying some local data into the Docker container and then initialising the database with this data.
FROM mongo:3.6.14-xenial
COPY ./dump/ /tmp/dump/
COPY mongo_restore.sh /docker-entrypoint-initdb.d/
RUN chmod 777 /docker-entrypoint-initdb.d/mongo_restore.sh
Below is what is in mongo_restore.sh.
#!/bin/bash
# Creates db using copied data
mongorestore /tmp/dump
Below are the last few lines of output when I run docker-compose up --build --exit-code-from combined; echo $?
...
b-combined | user disconnected
b-combined | Mongoose disconnected
b-combined | Mongoose disconnected through Heroku app shutdown
b-combined | TEST ENDED WITH EXIT CODE OF: 0 ===========================
b-combined | EXITING SCRIPT WITH EXIT CODE OF: 0 =====================================
Aborting on container exit...
Stopping b-combined ... done
137
What is confusing, as you can see above, is that the test and the script both ended with an exit code of 0, since all my tests passed, yet the container still exited with an exit code of 137.
What is even more confusing is that when I comment out the following line (the command that runs my Cypress integration tests) from my init.sh file, the container exits with an exit code of 0, as shown below.
NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome
Below is the output I receive when I comment out / remove the above line from init.sh.
...
b-combined | TEST ENDED WITH EXIT CODE OF: 0 ===========================
b-combined | EXITING SCRIPT WITH EXIT CODE OF: 0 =====================================
Aborting on container exit...
Stopping b-combined ... done
0
How do I get docker-compose to return me a zero exit code when my tests run successfully and a non-zero exit code when they fail?
EDIT:
After running the following docker-compose command in debug mode, I noticed that b-db seems to have some trouble shutting down and may be receiving a SIGKILL from Docker because of that (an exit code of 137 corresponds to 128 + 9, i.e. the process was killed with SIGKILL).
docker-compose --log-level DEBUG up --build --exit-code-from combined; echo $?
Is this indeed the case according to the following output?
...
b-combined exited with code 0
Aborting on container exit...
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Db-property%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 3819
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Db-property%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 4039
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/attach?logs=0&stdout=1&stderr=1&stream=1 HTTP/1.1" 101 0
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
Stopping b-combined ...
Stopping b-db ...
Pending: {<Container: b-db (0626d6)>, <Container: b-combined (196f3e)>}
Starting producer thread for <Container: b-combined (196f3e)>
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
Pending: {<Container: b-db (0626d6)>}
Pending: {<Container: b-db (0626d6)>}
Pending: {<Container: b-db (0626d6)>}
Pending: {<Container: b-db (0626d6)>}
...
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/wait HTTP/1.1" 200 32
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/stop?t=10 HTTP/1.1" 204 0
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561bStopping b-combined ... done
Finished processing: <Container: b-combined (196f3e)>
Pending: {<Container: b-db (0626d6)>}
Starting producer thread for <Container: b-db (0626d6)>
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
Pending: set()
Pending: set()
Pending: set()
Pending: set()
Pending: set()
Pending: set()
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
http://localhost:None "POST /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/stop?t=10 HTTP/1.1" 204 0
http://localhost:None "POST /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/wait HTTP/1.1" 200 30
Stopping b-db ... done
Pending: set()
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
137
Docker exit code 137 implies Docker doesn't have enough RAM to finish the work.
Unfortunately Docker consumes a lot of RAM.
Go to Docker Desktop app > Preferences > Resources > Advanced and increase the MEMORY - best to double it.
It might be an issue with RAM. I increased the memory using Docker preferences and started my service with docker-compose up. The server came up for me without any issue.
https://www.petefreitag.com/item/848.cfm
The error message that strikes me is: Aborting on container exit...
From docker-compose docs:
--abort-on-container-exit Stops all containers if any container was stopped.
Are you running docker-compose with this flag? If that is the case, think about what it means.
Once b-combined is finished, it simply exits. That means container b-db will be forced to stop as well. Even though b-combined returned with exit code 0, b-db's forced shutdown was likely not handled gracefully by MongoDB.
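To illustrate, stopping a container by hand does the same thing Compose does here: it sends SIGTERM, waits for a grace period (10 seconds by default, matching the stop?t=10 calls in your debug output), and then SIGKILLs the container, which is what produces the 137:
# Send SIGTERM to b-db, then SIGKILL it if it hasn't exited after 10 seconds
docker stop --time 10 b-db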
EDIT: I just realized you have --exit-code-from in the command line. That implies --abort-on-container-exit.
Solution: b-db needs more time to exit gracefully. Using docker-compose up --timeout 600 avoids the error.
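In terms of your original command, that would look something like this (600 seconds is just an example value; anything long enough for MongoDB to shut down cleanly should work):
# Give containers up to 600 seconds to stop gracefully instead of the 10-second default
docker-compose up --build --timeout 600 --exit-code-from combined; echo $?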