 

Why is gunicorn behind nginx with ssl 88% slower than gunicorn alone?

So, I have a simple Flask API application running on gunicorn with tornado workers. The gunicorn command line is:

gunicorn -w 64 --backlog 2048 --keep-alive 5 -k tornado -b 0.0.0.0:5005 --pid /tmp/gunicorn_api.pid api:APP

When I run Apache Benchmark from another server directly against gunicorn, here are the relevant results:

ab -n 1000 -c 1000 'http://****:5005/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second:    2823.71 [#/sec] (mean)
Time per request:       354.144 [ms] (mean)
Time per request:       0.354 [ms] (mean, across all concurrent requests)
Transfer rate:          2669.29 [Kbytes/sec] received

So we're getting close to 3k requests/sec.

Now I need SSL, so I'm running nginx as a reverse proxy. Here is what the same benchmark looks like against nginx on the same server:

ab -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second:    355.16 [#/sec] (mean)
Time per request:       2815.621 [ms] (mean)
Time per request:       2.816 [ms] (mean, across all concurrent requests)
Transfer rate:          352.73 [Kbytes/sec] received

That's a drop in performance of 87.4%. But for the life of me, I cannot figure out what is wrong with my nginx setup, which is this:

upstream sdn_api {
    server 127.0.0.1:5005;

    keepalive 100;
}

server {
    listen [::]:443;

    ssl on;
    ssl_certificate /etc/ssl/certs/api.sdninja.com.crt;
    ssl_certificate_key /etc/ssl/private/api.sdninja.com.key;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!kEDH:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
    ssl_session_cache shared:SSL:10m;

    server_name api.*****.com;
    access_log  /var/log/nginx/sdn_api.log;

    location / {
        proxy_pass http://sdn_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 100M;
        client_body_buffer_size 1m;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 256 16k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 300;
    }

}

And my nginx.conf:

user www-data;
worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip off;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##

    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##

    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

So does anyone have any idea why it's running so slow with this config? Thanks!

asked by George Sibble


1 Answer

A large part of HTTPS overhead is in the TLS handshake. Without keep-alive, ab opens a brand-new connection, and therefore performs a full handshake, for every one of the 1000 requests. Pass -k to ab to enable persistent (keep-alive) connections, so the handshake cost is paid once per connection rather than once per request. You will see that the benchmark is significantly faster.
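For illustration, the keep-alive run is the same command as in the question with -k added (the URL placeholder is copied from the question; -k is Apache Benchmark's standard flag for HTTP KeepAlive):

ab -k -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'

With connections reused, the handshake cost is amortized across requests, and the gap between the HTTPS numbers and the plain-gunicorn numbers should narrow considerably.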

answered by Hongli