
What is the difference between thread per connection vs thread per request?

Can you please explain the two methodologies which have been implemented in various servlet implementations:

  1. Thread per connection
  2. Thread per request

Which of the above two strategies scales better and why?

Geek asked Mar 05 '13



2 Answers

Which of the above two strategies scales better and why?

Thread-per-request scales better than thread-per-connection.

Java threads are rather expensive, typically using about 1 MB of memory for their stack each, whether they are active or idle. If you give each connection its own thread, that thread will typically sit idle between successive requests on the connection. Ultimately the framework needs to either stop accepting new connections (because it can't create any more threads) or start disconnecting old connections (which leads to connection churn if / when the user wakes up).
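To put rough numbers on that (the figures below are illustrative assumptions, not measurements of any particular JVM), here is the back-of-envelope arithmetic for idle thread-per-connection stacks:

```java
// Back-of-envelope sketch with assumed numbers: memory tied up by idle
// connection threads at roughly 1 MB of stack each.
public class ThreadMemory {

    // Total stack bytes for the given number of threads, assuming ~1 MB each.
    static long idleStackBytes(int threads) {
        return threads * (1L << 20);
    }

    public static void main(String[] args) {
        int idleConnections = 10_000; // hypothetical mostly-idle persistent connections
        long bytes = idleStackBytes(idleConnections);
        System.out.printf("%d idle threads tie up roughly %.1f GiB of stack%n",
                idleConnections, bytes / (double) (1L << 30));
    }
}
```

Even at these modest numbers, most of that memory is paying for threads that are just waiting for the next request.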

An HTTP connection requires significantly fewer resources than a thread stack, although there is a limit of roughly 64K open connections between a given pair of endpoints, because a TCP port number is only 16 bits.

By contrast, in the thread-per-request model a thread is associated with a connection only while a request is actually being processed. That usually means the service needs fewer threads to handle the same number of users, and since threads use significant resources, the service will be more scalable.
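The thread-per-request dispatch described above can be sketched in plain Java with a fixed thread pool (this is a toy sketch, not a real servlet container; the class and method names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of thread-per-request dispatch: a small fixed pool of worker
// threads serves many requests, each thread bound to a request only
// for as long as it takes to handle it.
public class PerRequestSketch {

    // Daemon threads so the JVM can exit without an explicit shutdown.
    static final ExecutorService workers = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Each "request" borrows a worker thread just long enough to build a response.
    static CompletableFuture<String> handle(int requestId) {
        return CompletableFuture.supplyAsync(
                () -> "response-" + requestId, workers);
    }

    public static void main(String[] args) {
        // 100 requests are served, but never more than 4 threads exist for them.
        for (int i = 0; i < 100; i++) {
            System.out.println(handle(i).join());
        }
    }
}
```

The key contrast with thread-per-connection: here the thread count tracks concurrent *work*, not the number of open (and possibly idle) connections.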

(And note that thread-per-request does not mean that the framework has to close the TCP connection between HTTP requests.)


Having said that, the thread-per-request model is not ideal when there are long pauses during the processing of each request. (And it is especially non-ideal when the service uses the Comet approach, which involves keeping the reply stream open for a long time.) To support this, the Servlet 3.0 spec provides an "asynchronous servlet" mechanism that allows a servlet's request-handling method to suspend its association with the current request thread. This releases the thread to go and process another request.

If the web application can be designed to use the "asynchronous" mechanism, it is likely to be more scalable than either thread-per-request or thread-per-connection.
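The shape of that asynchronous hand-off can be illustrated without the servlet API at all (this is a plain-Java analogy to `ServletRequest.startAsync()`, not actual Servlet 3.0 code; names and timings are made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Plain-Java sketch of the asynchronous-servlet idea: the request thread
// hands slow work to a separate executor and is released immediately,
// instead of blocking for the full duration of the request.
public class AsyncSketch {

    static final ExecutorService slowWorkers = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Analogous to startAsync(): the caller's thread returns at once,
    // and the response is completed later on a slow-worker thread.
    static CompletableFuture<String> handleAsync(String query) {
        return CompletableFuture.supplyAsync(() -> {
            sleep(50); // stand-in for a long-running operation, e.g. a database query
            return "result for " + query;
        }, slowWorkers);
    }

    static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> response = handleAsync("q1");
        // The "request thread" (main) is free here to go serve other requests;
        // we join only so this demo can print the eventual result.
        System.out.println(response.join());
    }
}
```

In a real Servlet 3.0 container the container-managed request thread is returned to the pool at the hand-off point, which is exactly why this style can out-scale both of the plain models.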


FOLLOWUP

Let's assume a single webpage with 1000 images. This results in 1001 HTTP requests. Further let's assume HTTP persistent connections are used. With the TPR strategy, this will result in 1001 thread pool management operations (TPMO). With the TPC strategy, this will result in 1 TPMO... Now depending on the actual costs for a single TPMO, I can imagine scenarios where TPC may scale better than TPR.

I think there are some things you haven't considered:

  • The web browser faced with lots of URLs to fetch to complete a page may well open multiple connections.

  • With TPC and persistent connections, the thread has to wait for the client to receive the response and send the next request. This wait time could be significant if the network latency is high.

  • The server has no way of knowing when a given (persistent) connection can be closed. If the browser doesn't close it, it could "linger", tying down the TPC thread until the server times out the connection.

  • The TPMO overheads are not huge, especially when you separate the pool overheads from the context switch overheads. (You need to do that, since TPC is going to incur context switches on a persistent connection; see above.)

My feeling is that these factors are likely to outweigh the TPMO saving of having one thread dedicated to each connection.
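If you want to get a feel for the size of a TPMO yourself, a crude comparison of a pool hand-off versus creating a fresh thread per task looks like this (this is an assumption-laden sketch, not a rigorous benchmark; there is no warm-up and only a single run, so treat the numbers as order-of-magnitude at best):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough measurement sketch: cost per task of handing work to an existing
// pool, versus creating and joining a brand-new thread for each task.
public class TpmoCost {

    static final ExecutorService pool = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Average nanoseconds per task when submitting to an existing pool.
    static long poolNanosPerTask(int n) {
        long t0 = System.nanoTime();
        try {
            for (int i = 0; i < n; i++) {
                pool.submit(() -> { }).get();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return (System.nanoTime() - t0) / n;
    }

    // Average nanoseconds per task when spinning up a fresh thread each time.
    static long freshThreadNanosPerTask(int n) {
        long t0 = System.nanoTime();
        try {
            for (int i = 0; i < n; i++) {
                Thread t = new Thread(() -> { });
                t.start();
                t.join();
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return (System.nanoTime() - t0) / n;
    }

    public static void main(String[] args) {
        System.out.println("pool submit+get : " + poolNanosPerTask(1_000) + " ns/task");
        System.out.println("fresh thread    : " + freshThreadNanosPerTask(1_000) + " ns/task");
    }
}
```

On typical JVMs the fresh-thread figure comes out much larger than the pool hand-off, which supports the point above that the per-request pool overhead is usually the smaller cost.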

Stephen C answered Oct 11 '22


HTTP 1.1 - Supports persistent connections, which means that more than one request/response exchange can take place over the same HTTP connection. So, to handle the requests received over the same connection, a new thread is created (or borrowed from a pool) for each request.

HTTP 1.0 - In this version only one request was sent over a connection, and the connection was closed after sending the response. So only one thread was created for each connection.
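The connection-reuse rules described above boil down to a small decision, which can be sketched like this (a simplified illustration of the defaults, ignoring proxies and other header subtleties):

```java
// Sketch of the persistent-connection defaults: HTTP/1.1 keeps the
// connection open unless told to close; HTTP/1.0 closes unless the
// client explicitly asks for keep-alive.
public class KeepAlive {

    static boolean keepAlive(String httpVersion, String connectionHeader) {
        if ("HTTP/1.1".equals(httpVersion)) {
            // Persistent by default; "Connection: close" opts out.
            return !"close".equalsIgnoreCase(connectionHeader);
        }
        // HTTP/1.0: close by default; "Connection: keep-alive" opts in.
        return "keep-alive".equalsIgnoreCase(connectionHeader);
    }

    public static void main(String[] args) {
        System.out.println(keepAlive("HTTP/1.1", null));         // persistent by default
        System.out.println(keepAlive("HTTP/1.0", null));         // closed by default
        System.out.println(keepAlive("HTTP/1.0", "keep-alive")); // explicit opt-in
    }
}
```

This default is what makes the thread-per-connection vs thread-per-request distinction matter so much more under HTTP 1.1 than it did under HTTP 1.0.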

Narendra Pathai answered Oct 11 '22