I've run into an issue with Nginx + uWSGI + Flask while trying to benchmark my Flask app: the combination is quite slow, as my tests show. I have a fresh installation of Nginx 1.1.19 and uWSGI 2.0 on an Ubuntu 12.04 VM with 4 cores and 4 GB of RAM. (Nginx and uWSGI config below.)
I first benchmarked Nginx by itself serving a static 20-byte file and got as many as 80k req/sec. I then benchmarked Nginx + uWSGI + a very basic Flask app (the hello-world example from the Flask site, reproduced below) and got a maximum of only 8k req/sec, a factor-of-10 reduction.
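For reference, the app under test is essentially the Flask quickstart hello-world; the module name and route below are illustrative, not taken from my actual project:

# hello.py - minimal Flask app used for the benchmark
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Tiny static response, roughly comparable in size to the 20-byte static file
    return "Hello World!"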
I turned on logging in Nginx and uWSGI (plus the uWSGI stats socket) and formatted the logs to print the request processing time for both. Here's what I was able to glean for the hello-world app:
uWSGI avg. req time = 0ms
Nginx avg. req time = 125ms (the Nginx log times include time spent in uWSGI)
I ran the same test against my own Flask app and the results followed the same pattern:
uWSGI avg. req time = 4ms
Nginx avg. req time = 815ms
PROBLEM: It appears a huge amount of time is spent in the communication between Nginx and uWSGI. Has anyone seen this problem before? I've tried all kinds of configurations for Nginx and uWSGI, all with the same result.
Note that I ran the tests with ApacheBench (ab), both locally on the VM and from a remote machine, with the same results (an example invocation is shown below).
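The exact numbers varied by run, but the ab invocations looked roughly like this (the request count, concurrency, and remote address are only illustrative):

ab -n 100000 -c 100 http://localhost/myapplication/
ab -n 100000 -c 100 http://<vm-address>/myapplication/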
Nginx conf
user www-data;
worker_processes 4;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10240;
    #multi_accept on;
    #use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;
    types_hash_max_size 2048;
    client_body_timeout 10;
    send_timeout 2;
    gzip on;
    gzip_disable "msie6";
    keepalive_disable "msie6";
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log off;
    error_log /var/log/nginx/error.log crit;
    log_format ugzip '$remote_addr - "$request" - $status $body_bytes_sent - [$request_time]';

    ##
    # Virtual Host Configs
    ##
    #include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        server_name localhost;
        access_log /var/log/nginx-access.log ugzip buffer=32k;
        error_log /var/log/nginx-error.log;

        location /myapplication {
            uwsgi_pass unix:/tmp/bdds.sock;
            include uwsgi_params;
            uwsgi_param UWSGI_SCHEME $scheme;
        }
    }
}
uWSGI conf (relevant parts)
[uwsgi]
master = true
listen = 40000
chmod-socket = 666
socket = /tmp/bdds.sock
workers = 8
harakiri = 60
harakiri-verbose = true
reload-mercy = 8
logto = /var/log/uwsgi-app.log
logformat = %(proto) %(method) %(uri) - %(status) %(rsize) - [%(msecs)]
vacuum = true
no-orphans = true
#cpu-affinity = 1
stats = /tmp/stats.sock
Is this common behavior for Nginx + uWSGI? Is there something blatantly wrong with my configuration? Again, this is all running on a 4-core/4 GB RAM Xen VM with Ubuntu 12.04.
Thanks in advance.
I guess you're sending many more requests per second than your sync part (uWSGI + Flask) can handle. This makes the requests spend most of their time queued at the async part (Nginx).
8k requests per second is not bad at all, especially when you compare it with serving a 20-byte file via sendfile(), which basically happens entirely in RAM with no IPC. By the way, when you benchmark you should remove any logging from uWSGI; in production you will very probably only log slow or bad requests. For example:
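As a sketch, the simplest way to drop the per-request log lines is uWSGI's disable-logging option (not in your posted config; add it to the [uwsgi] section):

[uwsgi]
# suppress per-request log lines; errors and startup messages are still logged
disable-logging = true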