I am running a Flask application hosted on Kubernetes in a Docker container, with Gunicorn managing the workers that serve API requests.
The following warning appears regularly, and it looks like requests are being cancelled for some reason. On Kubernetes, the pod shows no odd behavior or restarts and stays within 80% of its memory and CPU limits.
[2021-03-31 16:30:31 +0200] [1] [WARNING] Worker with pid 26 was terminated due to signal 9
How can we find out why these workers are killed?
I encountered the same warning message.
[WARNING] Worker with pid 71 was terminated due to signal 9
I came across this faq, which says that "A common cause of SIGKILL is when OOM killer terminates a process due to low memory condition."
I ran dmesg and confirmed that the worker was indeed killed by the kernel because the system was running out of memory:
Out of memory: Killed process 776660 (gunicorn)
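To confirm an OOM kill yourself, the kernel ring buffer is the place to look. A minimal sketch (the dmesg call usually needs root, and the grep pattern is just one reasonable choice, not the only one):

```shell
# Search the kernel log for OOM-killer events (usually needs root):
#   dmesg -T | grep -iE "out of memory|oom-killer|killed process"
# The same pattern matches the log line quoted above:
echo "Out of memory: Killed process 776660 (gunicorn)" \
  | grep -iE "out of memory|oom-killer|killed process"
```

Note that when the OOM killer takes out a gunicorn worker (a child process) rather than PID 1, the container itself keeps running and the master simply respawns the worker, which would explain why the pod shows no restarts.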
In our case, the application was taking around 5-7 minutes to load ML models and dictionaries into memory, so adding a timeout of 600 seconds solved the problem for us:
gunicorn main:app --workers 1 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8443 --timeout 600
I encountered the same warning message when I limited the Docker container's memory, e.g. with -m 3000m.
See docker-memory
and
gunicorn-Why are Workers Silently Killed?
The simple way to avoid this is to give the container a higher memory limit, or not to set one at all.
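To see what cap a container is actually running under, the cgroup memory files can be read from inside it. This is a sketch that tries both the cgroup v1 and v2 paths, since the file location differs between them:

```shell
# Print the container's effective memory limit, if any.
# cgroup v1 and v2 expose it at different paths, so try both:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null \
  || cat /sys/fs/cgroup/memory.max 2>/dev/null \
  || echo "no cgroup memory limit file found"
```

A very large number (or "max" on cgroup v2) means no effective limit; a value close to your application's working set makes OOM kills likely.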
I was using AWS Elastic Beanstalk to deploy my Flask application and ran into a similar error.
I was using a t2.micro instance, and after switching to t2.medium my app worked fine. In addition to this, I increased the timeout in my nginx config file.
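For reference, the nginx side of that change looks something like the snippet below. This is a sketch: the upstream address and the 600s value are assumptions chosen to match the gunicorn timeout discussed above, not values from the original setup.

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;   # assumed gunicorn address
    # Give slow-starting workers time to answer before nginx gives up:
    proxy_read_timeout    600s;
    proxy_connect_timeout 75s;
}
```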