
Nginx slow static file serving (slower than node?)

I have a Node.js app server sitting behind an Nginx configuration that has been working well. Anticipating some load increase, I figured I'd get ahead of it by setting up another Nginx to serve the static files on the Node.js app server's box. So, essentially, I have set up an Nginx reverse proxy in front of Nginx & Node.js.

When I reload Nginx and let it start serving requests (Nginx<->Nginx) on the /publicfile/ routes, I notice a SIGNIFICANT decrease in speed. Something that took Nginx<->Node.js around 3 seconds now takes Nginx<->Nginx ~15 seconds!

I'm new to Nginx and have spent the better part of the day on this and finally decided to post for some community help. Thanks!

The web-facing Nginx nginx.conf:

http {
# Main settings
sendfile                        on;
tcp_nopush                      on;
tcp_nodelay                     on;
client_header_timeout           1m;
client_body_timeout             1m;
client_header_buffer_size       2k;
client_body_buffer_size         256k;
client_max_body_size            256m;
large_client_header_buffers     4   8k;
send_timeout                    30;
keepalive_timeout               60 60;
reset_timedout_connection       on;
server_tokens                   off;
server_name_in_redirect         off;
server_names_hash_max_size      512;
server_names_hash_bucket_size   512;

# Log format
log_format  main    '$remote_addr - $remote_user [$time_local] $request '
                    '"$status" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
log_format  bytes   '$body_bytes_sent';

access_log          /var/log/nginx/access.log  main;

# Mime settings
include             /etc/nginx/mime.types;
default_type        application/octet-stream;


# Compression
gzip                on;
gzip_comp_level     9;
gzip_min_length     512;
gzip_buffers        8 64k;
gzip_types          text/plain text/css text/javascript
                   application/x-javascript application/javascript;
gzip_proxied        any;


# Proxy settings
#proxy_redirect      off;
proxy_set_header    Host            $host;
proxy_set_header    X-Real-IP       $remote_addr;
proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header   Set-Cookie;
proxy_connect_timeout   90;
proxy_send_timeout  90;
proxy_read_timeout  90;
proxy_buffers       32 4k;

real_ip_header     CF-Connecting-IP;


# SSL PCI Compliance
# - removed for brevity

# Error pages
# - removed for brevity 


# Cache
proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
proxy_cache_key "$host$request_uri $cookie_user";
proxy_temp_path  /var/cache/nginx/temp;
proxy_ignore_headers Expires Cache-Control;
proxy_cache_use_stale error timeout invalid_header http_502;
proxy_cache_valid any 3d;

proxy_http_version 1.1;  # recommended with keepalive connections 
# WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

map $http_cookie $no_cache {
    default 0;
    ~SESS 1;
    ~wordpress_logged_in 1;
}

upstream backend {
    # my 'backend' server IP address (local network)
    server xx.xxx.xxx.xx:80;
}

# Wildcard include
include             /etc/nginx/conf.d/*.conf;
}

The web-facing Nginx server block that forwards static file requests to the Nginx behind it (on another box):

server {
  listen       80 default;
  access_log  /var/log/nginx/nginx.log main;

  # pass static assets on to the app server nginx on port 80
  location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
    proxy_pass  http://backend;
  }
}

And finally the "backend" server:

http {

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
sendfile_max_chunk 32;
# server_tokens off;

# server_names_hash_bucket_size 64;

include /etc/nginx/mime.types;
default_type application/octet-stream;


access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

server {
  root /home/admin/app/.tmp/public;

  listen      80 default;
  access_log  /var/log/nginx/app-static-assets.log;

  location /publicfile {
    alias /home/admin/APP-UPLOADS;
  }
}
}
Asked Aug 02 '16 by Cory Robinson

People also ask

What is sendfile on Nginx?

By default, NGINX handles file transmission itself and copies the file into a buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and copies data directly from one file descriptor to another.
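For reference, here is a minimal sketch of how these directives are usually combined in an http block (the values are illustrative, not tuned for this setup):

http {
    sendfile            on;     # let the kernel copy the file to the socket instead of going through userspace buffers
    tcp_nopush          on;     # with sendfile, send the response headers and the start of the file in one packet
    # sendfile_max_chunk 512k;  # optionally cap how much a single sendfile() call may transfer per connection
}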

Is Nginx asynchronous?

NGINX has a modular, event‑driven, asynchronous, single-threaded architecture that scales extremely well on generic server hardware and across multi-processor systems.

What is keepalive in Nginx?

In Nginx, keepalive keeps a connection open for a certain number of requests, or until the keepalive timeout expires, so that subsequent requests can reuse it instead of opening a new connection.
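A small example of the relevant directives (the numbers are arbitrary, not a recommendation for this setup):

http {
    keepalive_timeout  65;      # keep an idle client connection open for up to 65 seconds
    keepalive_requests 100;     # allow up to 100 requests over one keep-alive connection

    upstream backend {
        server 127.0.0.1:8080;  # hypothetical upstream
        keepalive 16;           # also keep up to 16 idle connections open to the upstream
    }
}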


1 Answer

As @keenanLawrence mentioned in the comments above, the fix was the sendfile_max_chunk directive.

After setting sendfile_max_chunk to 512k, I saw a significant speed improvement in static file delivery (from disk) by Nginx.

I experimented with values of 8k, 32k, 128k, and finally 512k. The optimal chunk size seems to be specific to each server's configuration, depending on the content being delivered, the threads available, and the server request load.
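For clarity, this is roughly how the fix looks in the backend http block from the question; only the relevant lines are shown. Note that the original sendfile_max_chunk 32; has no size suffix, so it limits each sendfile() call to 32 bytes, which throttles static file delivery badly:

http {
    sendfile            on;
    sendfile_max_chunk  512k;   # was 32 (i.e. 32 bytes!) — raise to a sane chunk size
    # ... rest of the block unchanged
}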

I also noticed another significant bump in performance when I changed worker_processes auto; to worker_processes 2;, which went from running a worker process on every CPU to using only 2. In my case this was more efficient, since the Node.js app servers run on the same machine and also compete for the CPUs.
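For reference, worker_processes lives in the main (top-level) context of nginx.conf, outside the http block. A sketch of the change:

# main context of nginx.conf
# worker_processes auto;   # previous setting: one worker per CPU core
worker_processes 2;        # pin Nginx to 2 workers, leaving the other cores to the Node.js processes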

Answered Sep 26 '22 by Cory Robinson