
Nginx proxy buffering: changing the buffers' number vs. their size?

I've been trying to figure out how the two parameters of this setting:

proxy_buffers [number] [size];

may affect (improve or degrade) proxy server performance, and whether it's better to change the buffers' size, their number, or both.

In my particular case, we're talking about a system serving dynamically generated binary files that vary in size (~60-200 kB). Nginx serves as a load-balancer in front of 2 Tomcats that act as generators. I saw in Nginx's error.log that, with the default buffer size, all proxied responses get buffered to a temporary file, so the logical fix seemed to be changing the setting to something like this:

proxy_buffers 4 32k;

and the warning message disappeared.
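For context, a sketch of what such a configuration could look like as a whole. The upstream name, hostnames and ports below are illustrative assumptions, not taken from the actual setup:

```nginx
# Hypothetical sketch: nginx load-balancing two Tomcat generators,
# with body buffers sized so a ~128 kB response fits in memory.
upstream generators {
    server tomcat1.internal:8080;
    server tomcat2.internal:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://generators;
        proxy_buffering on;       # the default
        proxy_buffer_size 8k;     # for the first part of the response (headers)
        proxy_buffers 4 32k;      # 4 x 32 kB = 128 kB of body buffers
    }
}
```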

What's not clear to me here is whether I should prefer one buffer of a larger size, or several smaller buffers. E.g.:

proxy_buffers 1 128k; vs proxy_buffers 4 32k; vs proxy_buffers 8 16k;, etc...

What could be the difference, and how might it affect performance (if at all)?

Less asked Nov 01 '15 08:11

People also ask

What is nginx proxy buffer size?

By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.
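Since the default tracks the platform's page size, it can be checked directly. On Linux and macOS, assuming a POSIX environment, `getconf` reports it:

```shell
# Print the memory page size in bytes; nginx's default proxy buffer
# size matches this value (commonly 4096 or 8192).
getconf PAGESIZE
```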

What is nginx proxy buffering?

Proxy buffering means that NGINX stores the response from a server in internal buffers as it comes in, and doesn't start sending data to the client until the entire response is buffered.

Why use nginx reverse proxy?

Security and anonymity – By intercepting requests headed for your backend servers, a reverse proxy server protects their identities and acts as an additional defense against security attacks.

Why Nginx is required?

Because it can handle a high volume of connections, NGINX is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to slower upstream servers – anything from legacy database servers to microservices.


1 Answer

First, it's a good idea to see what the documentation says about the directives:

http://nginx.org/r/proxy_buffers

Syntax: proxy_buffers number size;
Default: proxy_buffers 8 4k|8k;
Context: http, server, location

Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.

Jumping a level up to http://nginx.org/r/proxy_buffering provides a bit more explanation:

When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not fit into memory, a part of it can be saved to a temporary file on the disk. …

When buffering is disabled, the response is passed to a client synchronously, immediately as it is received. …
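For completeness, both behaviours can be controlled per location. The directives below are from the nginx docs; the location path and upstream address are assumptions:

```nginx
location /downloads/ {
    proxy_pass http://127.0.0.1:8080;

    # Stream the response to the client as it arrives, with no buffering:
    proxy_buffering off;

    # Alternatively, keep in-memory buffering but forbid spilling the
    # response into a temporary file on disk:
    # proxy_max_temp_file_size 0;
}
```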


So, what does all of that mean?

  1. First of all, nginx is highly optimised to be as efficient as possible with the resources at stake. It's well known for using the smallest possible amount of memory to service each individual connection; at that scale, an extra 4 KB per connection would already be quite an increase.

  2. You may notice that the buffer size is chosen to match the memory page size of the platform at stake (https://en.wikipedia.org/wiki/Page_(computer_memory)). Long story short, the absolute best value is beyond the scope of this question, and is highly dependent on the operating system and CPU architecture.

  3. Realistically, the difference between a bigger number of smaller buffers, or a smaller number of bigger buffers, may depend on the memory allocator provided by the operating system, as well as on how much memory you have and how much of it you're willing to waste on buffers that are allocated but never filled.

    E.g., I would not set it to proxy_buffers 1 1024k, because that would allocate a 1 MB buffer for every buffered connection, even when the content easily fits into a 4 KB one, potentially wasting extra memory (although, of course, unused-but-allocated memory has been virtually free since the advent of virtual memory in the 1980s). There's likely a good reason the default number of buffers was chosen to be 8 as well.
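    A quick back-of-the-envelope sketch of the worst case per connection, i.e. assuming every buffer gets filled (nginx only allocates buffers as needed, so the real footprint is usually lower):

```shell
# Worst-case buffer memory per connection is number * size. Note that
# all three settings compared in the question share the same ceiling;
# they differ only in allocation granularity for smaller responses.
for setting in "1 128" "4 32" "8 16"; do
    set -- $setting
    echo "proxy_buffers $1 ${2}k -> $(($1 * $2)) kB max per connection"
done
```

    Since the ceiling is identical, the practical difference is that smaller buffers waste less memory on responses that don't fill them.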

  4. Increasing the buffers at all might be rather pointless if you also cache the responses for these binary files with http://nginx.org/r/proxy_cache, because nginx will still write them to disk for caching, and you might as well not spend the extra memory on buffering them too.

    A good operating system should already cache whatever gets written to disk, through its file-system buffer-cache functionality; see https://www.tldp.org/LDP/sag/html/buffer-cache.html (and possibly the somewhat strangely named article at https://en.wikipedia.org/wiki/Page_cache, as the "disk buffer" name was already taken for the HDD hardware article). So, there's likely little need to duplicate that buffering directly within nginx. You might also take a look at varnish-cache for additional ideas and inspiration on multi-level caching, and on the fact that good operating systems are already supposed to take care of many things that some folks still try, mistakenly, to optimise through application-specific functionality.
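    If caching is indeed the goal, a minimal sketch could look like the following; the zone name, path, sizes and validity period are illustrative assumptions:

```nginx
# Hypothetical proxy_cache setup for the generated binaries.
proxy_cache_path /var/cache/nginx/binaries
                 keys_zone=binaries:10m max_size=1g inactive=60m;

server {
    location /generated/ {
        proxy_pass http://127.0.0.1:8080;   # assumed upstream address
        proxy_cache binaries;
        proxy_cache_valid 200 10m;          # cache successful responses
    }
}
```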

  5. Likewise, if you don't actually cache the responses, you might as well ask yourself whether buffering is appropriate in the first place.

    Realistically, buffering can be useful to better protect your upstreams from the slowloris attack vector (https://en.wikipedia.org/wiki/Slowloris_(computer_security)); however, if you let nginx keep megabyte-sized buffers, you essentially start exposing nginx itself to clients with malicious intent, who can make it consume an unreasonable amount of resources.
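    If large buffers are kept anyway, it helps to also cap how long a slow client can tie up a connection; the timeout values below are illustrative, not recommendations:

```nginx
client_body_timeout 10s;   # limit slow request bodies
send_timeout 10s;          # limit slow readers of the response
keepalive_timeout 15s;
```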

  6. If the responses are too large, you might want to optimise at the response level instead: split some content into individual files, compress at the file level, compress with gzip at the HTTP Content-Encoding level, etc.
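As a sketch of the last option: compressing at the HTTP level shrinks what needs buffering in the first place. Note that binary formats which are already compressed won't benefit, so the MIME types below are illustrative only:

```nginx
gzip on;
gzip_types application/octet-stream application/json;
gzip_min_length 1024;   # skip responses too small to benefit
```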


TL;DR: this is really a pretty broad question, and there are too many variables that require non-trivial investigation to come up with the absolute best answer for any given situation.

cnst answered Sep 22 '22 18:09