I have an Nginx web server with a uWSGI app server installed on a single-CPU Ubuntu 14.04 image.
The uWSGI app server successfully serves requests for a Flask app. The problem I am facing is that requests from a single client will sometimes time out for an extended period (1-2 hours).
This was happening without any workers or threads specified in my uwsgi.conf file. Is there an ideal number of workers/threads to use per CPU?
I am using the Emperor service to start the uWSGI app server. This is what my uwsgi.conf (an Upstart job) looks like:
description "uWSGI"
start on runlevel [2345]
stop on runlevel [06]
respawn
env UWSGI=/var/www/parachute_server/venv/bin/uwsgi
env LOGTO=/var/log/uwsgi/emperor.log
exec $UWSGI --master --workers 2 --threads 2 --emperor /etc/uwsgi/vassals --die-on-term --uid www-data --gid www-data --logto $LOGTO --stats 127.0.0.1:9191
Could this be a performance problem with Nginx/uWSGI, or is it more likely that these timeouts are occurring because I am only using a single CPU?
Any help is much appreciated!
This configuration tells uWSGI to run up to 10 workers under load. If the app is idle, uWSGI stops workers, but it always leaves at least 2 of them running. With cheaper-initial you can control how many workers are spawned at startup.
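The behaviour described above corresponds to uWSGI's "cheaper" subsystem. A minimal ini sketch of such a vassal config (the numbers here are illustrative, not from the question):

```ini
[uwsgi]
processes = 10        ; upper bound on workers under load
cheaper = 2           ; never scale below 2 running workers
cheaper-initial = 4   ; workers spawned at startup
```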
Yes, every single thread can handle one request, so if you have 3 processes with 5 threads each you can handle 15 concurrent requests. When hosting Python behind uWSGI without threads enabled, it can only serve as many simultaneous requests as there are processes.
By default, uWSGI does not enable threading support within the Python interpreter core. This means it is not possible to create background threads from Python code unless you turn that support on.
The threads option tells uWSGI to start the application in prethreaded mode. That essentially means it launches the application across multiple threads per process: for example, four processes with two threads each gives you eight concurrent request handlers.
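As a sketch of the prethreaded setup described above (the process/thread counts are illustrative; note that setting threads also enables Python threading support in uWSGI):

```ini
[uwsgi]
processes = 4    ; worker processes
threads = 2      ; threads per worker -> up to 8 concurrent requests
```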
Interesting issue you have...
Generally, you'd specify at least 2 * #CPUs + 1 workers. This is because while one worker is blocked performing a read/write on a socket, another worker can still accept requests. Also, the threads option is useful if your workers are synchronous, because the threads can notify the master process that a worker is still busy, preventing it from being killed on a timeout.
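The rule of thumb above can be written down directly. This is a tiny illustrative helper (recommended_workers is a name I made up, not a uWSGI API):

```python
import multiprocessing


def recommended_workers(cpu_count=None):
    """Common rule of thumb for synchronous workers: 2 * CPUs + 1."""
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return 2 * cpu_count + 1


# On the single-CPU droplet from the question this suggests 3 workers.
print(recommended_workers(1))
```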
I think having only one worker was the reason for your timeouts (a single slow request blocks all others), but you should also inspect the responses from your app. If they take a long time (say, reading from a database), you'll want to raise the uwsgi_read_timeout directive in Nginx to give uWSGI enough time to process the request.
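For reference, a minimal sketch of where that directive goes (the socket path is an assumption; Nginx's default uwsgi_read_timeout is 60s):

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app.sock;  # assumed socket path
    uwsgi_read_timeout 300s;              # allow slow backend responses
}
```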
I hope this helps.