 

Unable to turn off chunked transfer encoding in nginx with gzip for static assets served from Node backend

We have a Node/express web app that serves static assets, in addition to its normal content, via express.static(). There is an nginx server in front of it that is currently configured to gzip the responses for these static assets when the client supports it.

However, while nginx is gzipping as expected, it drops the Content-Length header from the origin and sets Transfer-Encoding: chunked instead. This breaks caching on our CDN.

Below are the responses for a typical static asset request (a JS file in this case), first from the Node backend and then from nginx:

Request:

curl -s -D - 'http://my_node_app/res/my_js.js' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Connection: keep-alive' --compressed -o /dev/null

Response Headers from Node:

HTTP/1.1 200 OK
Accept-Ranges: bytes
Date: Wed, 07 Jan 2015 02:24:55 GMT
Cache-Control: public, max-age=0
Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
Content-Type: application/javascript
Content-Length: 37386   // <--- The expected header
Connection: keep-alive

Response Headers from nginx:

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 07 Jan 2015 02:24:55 GMT
Content-Type: application/javascript
Transfer-Encoding: chunked  // <--- The problematic header
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: public, max-age=0
Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
Content-Encoding: gzip

Our current nginx configuration for the static assets location is as follows:

nginx config:

# cache file paths that start with /res/
location /res/ {
    limit_except GET HEAD { }

    # http://nginx.com/resources/admin-guide/caching/
    # http://nginx.org/en/docs/http/ngx_http_proxy_module.html

    proxy_buffers 8 128k;
    #proxy_buffer_size 256k;
    #proxy_busy_buffers_size 256k;

    # The cache depends on proxy buffers, and will not work if proxy_buffering is set to off.
    proxy_buffering     on;
    proxy_http_version  1.1;
    proxy_set_header  Connection "";
    proxy_connect_timeout  2s;
    proxy_read_timeout  5s;
    proxy_pass          http://node_backend;

    chunked_transfer_encoding off;

    proxy_cache         my_app;
    proxy_cache_valid   15m;
    proxy_cache_key     $uri$is_args$args;
}

As can be seen from the config above, even though we've explicitly set chunked_transfer_encoding off for these paths as per the nginx docs, have proxy_buffering on, and have a large enough proxy_buffers setting, the response is still being chunked.

What are we missing here?

--Edit 1: version info--

$ nginx -v
nginx version: nginx/1.6.1

$ node -v
v0.10.30

--Edit 2: nginx gzip config--

# http://nginx.org/en/docs/http/ngx_http_gzip_module.html
gzip on;
gzip_buffers 32 4k;
gzip_comp_level 1;
gzip_min_length 1000;
#gzip_http_version 1.0;
gzip_types application/javascript text/css;
gzip_proxied any;
gzip_vary on;
asked Jan 07 '15 by kodeninja


1 Answer

You are correct; let me elaborate.

The headers are the first thing that has to be sent, but since the compression is done while streaming the response, the final (compressed) size is not known at that point. Only the size of the uncompressed asset is known, and sending that too-large value as the Content-Length would also be incorrect.

Thus, there are two options:

  1. Use Transfer-Encoding: chunked.
  2. Compress the asset completely before sending any data, so that the compressed size is known.

Currently you're experiencing the first case, and it sounds like you really need the second. The easiest way to get the second case is to turn on gzip_static, as @kodeninja said in the comments.
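For reference, here is a minimal sketch of what that could look like, assuming nginx was built with the gzip_static module and that the same files express.static() serves are also available on disk on the nginx host (the root path below is hypothetical):

location /res/ {
    limit_except GET HEAD { }

    # Hypothetical path: /res/my_js.js would map to
    # /var/www/my_app/public/res/my_js.js on the nginx host.
    root /var/www/my_app/public;

    # Serve a pre-compressed my_js.js.gz directly when the client accepts gzip.
    # The .gz file has a known size, so nginx can send Content-Length instead of
    # Transfer-Encoding: chunked.
    gzip_static on;
    gzip_vary   on;

    expires 15m;
}

Note that gzip_static does not compress anything itself: the .gz copies have to be generated ahead of time (for example as part of the build), and with this approach the /res/ location is served straight from disk rather than through proxy_pass.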

answered by EnabrenTane