
How does Nginx handle HTTP requests?

Tags: nginx, apache

I understand the thread-driven model that Apache uses: every connection opens a thread, and when the response has been sent, the thread is closed, releasing its resources for other threads.

But I don't get the event-driven design that Nginx uses. I've read some basics about event-driven design, but I don't understand how Nginx uses it to handle web requests.

Where can I read about and understand how Nginx handles connections in an event-driven way, so that I see why it's better, rather than just accepting that event-based design beats thread-driven design?

Asked Aug 09 '10 by never_had_a_name

People also ask

Does Nginx use HTTP?

Nginx is open-source software, whereas Nginx Plus is the commercial version that comes with user support. Both fully support HTTP/2. To enable HTTP/2 on Nginx, there are a few things to know: it's important to enable HTTPS on your server, because almost all browsers allow HTTP/2 only over HTTPS.
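In practice this usually comes down to adding `http2` to an HTTPS `listen` directive. A minimal sketch (the domain and certificate paths are placeholders):

```nginx
# Hypothetical server block; hostname and certificate paths are placeholders.
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    root /var/www/html;
}
```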

How many simultaneous HTTP requests can Nginx handle?

The configuration allows bursts of up to 12 requests, the first 8 of which are processed without delay. A delay is added after 8 excessive requests to enforce the 5 r/s limit. After 12 excessive requests, any further requests are rejected.
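Those numbers correspond to nginx's `limit_req` module. A sketch of a configuration that would behave that way (the zone name and backend address are made up, and the `delay=` parameter requires nginx 1.15.7 or later):

```nginx
# 5 requests/second per client IP, bursts of up to 12,
# the first 8 excess requests forwarded without delay.
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=5r/s;

server {
    listen 80;

    location / {
        limit_req zone=mylimit burst=12 delay=8;
        proxy_pass http://127.0.0.1:8080;   # hypothetical backend
    }
}
```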

How does Nginx work internally?

NGINX uses a predictable process model that is tuned to the available hardware resources: the master process performs privileged operations such as reading the configuration and binding to ports, and then creates a small number of child processes (the cache loader, cache manager, and worker processes).
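The worker count and per-worker connection capacity are set in the main configuration. A typical sketch (the numbers are illustrative, not recommendations):

```nginx
worker_processes auto;          # spawn one worker per CPU core

events {
    worker_connections 10240;   # connections each worker may hold open
}
```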

How does nginx proxy pass work?

Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing. The result of the request is passed back to Nginx, which then relays the information to the client.
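A minimal reverse-proxy sketch (the upstream name and address are invented for the example):

```nginx
upstream app_backend {
    server 127.0.0.1:3000;      # hypothetical application server
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;           # hand the request to the backend
        proxy_set_header Host $host;             # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```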


1 Answer

Nginx uses the Reactor pattern. Basically, it's single-threaded (but it can fork several worker processes to utilize multiple cores). The main event loop waits for the OS to signal a readiness event, e.g. that data is available to read from a socket; at that point the data is read into a buffer and processed. A single thread can very efficiently serve tens of thousands of simultaneous connections, whereas the thread-per-connection model fails at that scale because of the huge context-switching overhead and the large memory consumption (each thread needs its own stack).
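To make the reactor idea concrete, here is a stripped-down, single-threaded event loop in C using Linux `epoll`. It is only an illustration of the pattern, not nginx's actual code: one thread registers every socket with the kernel, sleeps in `epoll_wait()`, and reacts to whichever sockets become ready, instead of parking a thread per connection.

```c
/* Sketch of the Reactor pattern with epoll (illustration only, not nginx). */
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

static void set_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void) {
    /* Listening socket on an arbitrary port. */
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, SOMAXCONN);
    set_nonblocking(listen_fd);

    /* The "reactor": one epoll instance watches every socket. */
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    char buf[4096];

    for (;;) {
        /* The single thread blocks here until the OS reports readiness. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* Readiness on the listener means a new connection arrived. */
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) {
                    set_nonblocking(conn);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                }
            } else {
                /* Readiness on a connection means data can be read. */
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {
                    close(fd);                     /* peer closed or error */
                } else {
                    const char *resp =
                        "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
                    write(fd, resp, strlen(resp)); /* canned response */
                    close(fd);
                }
            }
        }
    }
}
```

The point is that the loop never blocks on any individual connection; it only blocks in `epoll_wait()`, so one thread can juggle thousands of sockets with a small, fixed memory footprint.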

Answered Oct 05 '22 by Onestone