I experimented with an extremely small Django app that serves mainly HTML and static content, with no DB operations. The app runs on nginx and uWSGI. I also have Postgres installed, but I did not perform any DB operations for this test.
I find that memory is not being released by the uWSGI process. In the chart from New Relic, the memory occupied by the uWSGI process stays flat at ~100MB, even though there has been absolutely no activity on the website/app during that period.
Also FYI: the app/uWSGI process consumed only 56MB when it started. It reached ~100MB while I was load-testing with ab (Apache Benchmark), hitting it with -n 1000 -c 10 or around that range.
Nginx Conf
server
{
    listen 80;
    server_name <ip_address>;
    root /var/www/mywebsite.com/;
    access_log /var/www/logs/nginx_access.log;
    error_log /var/www/logs/nginx_error.log;
    charset utf-8;
    default_type application/octet-stream;
    tcp_nodelay off;
    gzip on;
    location /static/
    {
        alias /var/www/mywebsite.com/static/;
        expires 30d;
        access_log off;
    }
    location /
    {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/mywebsite.com/django.sock;
    }
}
app_uwsgi.ini
[uwsgi]
plugins = python
; define variables to use in this script
project = myapp
base_dir = /var/www/mywebsite.com
app=reloc
uid = www-data
gid = www-data
; process name for easy identification in top
procname = %(project)
no-orphans = true
vacuum = true
master = true
harakiri = 30
processes = 2
pythonpath = %(base_dir)/
pythonpath = %(base_dir)/src
pythonpath = %(base_dir)/src/%(project)
logto = /var/www/logs/uwsgi.log
chdir = %(base_dir)/src/%(project)
module = reloc.wsgi:application
socket = /var/www/mywebsite.com/django.sock
chmod-socket = 666
chown-socket = www-data
Update 1: So it looks like it's not uWSGI itself, but the Python processes that retain certain data structures for faster processing.
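If you want to confirm which worker is holding the memory, one option (not part of the original question, just a suggestion) is uWSGI's memory-report setting, which adds the handling worker's address-space and RSS usage to each request log line:

[uwsgi]
; log per-request address-space and RSS usage of the handling worker
memory-report = true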
You might also limit the maximum number of requests per worker with the max-requests option in your .ini file. Once a worker has handled that many requests, it is killed and a new one is spawned in its place.
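A minimal sketch of how that could look in the app_uwsgi.ini above (the value 1000 is just an illustrative choice, not a recommendation from the original answer):

[uwsgi]
; recycle each worker after it has served 1000 requests
max-requests = 1000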
It is common for web frameworks to load their code into memory. This is not generally a problem, but it is not a bad idea to put a cap on a worker's total memory consumption, since an individual worker's memory use may grow over the course of many requests. When a worker reaches or exceeds the cap, it restarts itself once the current request is served. This is done via the reload-on-rss option; what you set it to depends on the memory available on your server and the number of workers you are running.
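For example, a sketch of capping each worker at roughly 80MB of resident memory (the 80MB threshold is an assumption based on this app's footprint, not a value given in the original answer; reload-on-rss takes megabytes):

[uwsgi]
; restart a worker once its resident set size exceeds ~80MB
reload-on-rss = 80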