I have a Django + uWSGI based website. Some of the tables have almost 1 million rows.
After the website has been in use for a while, the VIRT memory of the uwsgi process reaches almost 20GB... it almost kills my server...
Could someone tell me what may have caused this problem? Is it because my tables have too many rows? (Unlikely; Pinterest has much more data.) For now, I have had to use reload-on-as = 10024 and reload-on-rss = 4800 to kill the workers every few minutes... it is painful... Any help?
Here is my uwsgi.ini file:
[uwsgi]
chdir = xxx
module = xxx.wsgi
master = true
processes = 2
socket = 127.0.0.1:8004
chmod-socket = 664
no-orphans = true
#limit-as = 256
reload-on-as = 10024
reload-on-rss = 4800
max-requests = 250
uid = www-data
gid = www-data
#chmod-socket = 777
chown-socket = www-data
# clear environment on exit
vacuum = true
After some digging on Stack Overflow and Google, here is the solution.
The key parameter to tune in uwsgi.ini turned out to be max-requests. Originally I had set it to 2000; now it is set to 50, so uwsgi respawns each worker before its memory can grow too much.
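For reference, the relevant line in uwsgi.ini ends up looking like this (50 is just the value that worked for my workload; tune it to yours):

# recycle each worker after it has handled 50 requests,
# which caps how much memory a single worker can accumulate
max-requests = 50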
Then I tried to figure out which request was pulling a huge query result from the database. I ended up finding this little line:
amount = sum(x.amount for x in Project.objects.all())
The Project table has over 1 million complex entries, so this loads the whole table into memory. Since I commented this out, everything runs smoothly.
So it is good to understand how Django querysets actually hit the database.
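If only the total is needed, a minimal sketch of one alternative (assuming Project has a numeric amount field, as in the line above) is to let the database do the summing instead of materializing every row in Python:

from django.db.models import Sum

# Let the database compute the total; only a single aggregate
# value crosses the wire instead of ~1 million model instances.
total = Project.objects.aggregate(total=Sum('amount'))['total']

# If the rows really must be iterated one by one, .iterator()
# streams them in chunks instead of caching the whole queryset.
for project in Project.objects.only('amount').iterator():
    pass  # process each row here

The point of the design is that aggregation happens inside the database engine, so the Python worker never holds the full table in memory.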
(Sorry, I don't have enough reputation to comment, so apologies if this answer doesn't help in your case.)
I had the same issue running Django on uwsgi/nginx, with uwsgi controlled via supervisor. The uwsgi process under supervisor started using lots of memory and consuming 100% CPU, so the only option was to repeatedly restart uwsgi.
Turned out the solution was to set up logging in the uwsgi.ini file:
logto = /var/log/uwsgi.log
There is some discussion of this here: https://github.com/unbit/uwsgi/issues/296