 

Can the thread per request model be faster than non-blocking I/O?

I remember 2 or 3 years ago reading a couple articles where people claimed that modern threading libraries were getting so good that thread-per-request servers would not only be easier to write than non-blocking servers but that they'd be faster, too. I believe this was even demonstrated in Java with a JVM that mapped Java threads to pthreads (i.e. the Java nio overhead was more than the context-switching overhead).

But now I see all the "cutting edge" servers use asynchronous libraries (Java nio, epoll, even node.js). Does this mean that async won?

Graham asked Feb 08 '11




1 Answer

Not in my opinion. If both models are well implemented (this is a BIG requirement) I think that the concept of NIO should prevail.

At the heart of a computer are cores. No matter what you do, you cannot parallelize your application beyond the number of cores you have. For example, if you have a 4-core machine, you can ONLY do 4 things at a time (I'm glossing over some details here, but that suffices for this argument).

Expanding on that thought, if you ever have more threads than cores, you have waste. That waste takes two forms. First is the overhead of the extra threads themselves. Second is the time spent switching between threads. Both are probably minor, but they are there.
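To make that overhead concrete, here is a minimal thread-per-request sketch in plain Java (my illustration, not code from the post; the port and class name are arbitrary). Every accepted connection gets its own OS thread, so with thousands of mostly idle connections you end up with far more threads than cores:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerRequestEchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();            // blocks until a connection arrives
                new Thread(() -> handle(client)).start();   // one OS thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             InputStream in = c.getInputStream();
             OutputStream out = c.getOutputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {   // the thread sits blocked here waiting for data
                out.write(buf, 0, n);            // echo the bytes back
            }
        } catch (Exception ignored) {
        }
    }
}
```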

Ideally, you have a single thread per core, and each of those threads runs at 100% processing speed on its core. In the ideal case, task switching never occurs. Of course there is the OS, but if you take a 16-core machine and leave 2-3 threads for the OS, then the remaining 13-14 go towards your app. Those threads can switch what they are doing within your app, such as when they are blocked by IO requirements, but they don't have to pay that cost at the OS level. The switching is written right into your app.
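That "switch within your app" idea is essentially what a selector-based event loop does. Here is a minimal single-threaded Java NIO sketch (again my illustration, not from the answer; a real server would run one such loop per core and handle partial writes properly):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                    // one thread waits on all connections at once
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);     // non-blocking: returns whatever is available
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);        // echo back (simplified: assumes a full write)
                    }
                }
            }
        }
    }
}
```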

An excellent example of this scaling is SEDA (http://www.eecs.harvard.edu/~mdw/proj/seda/). It showed much better scaling under load than a standard thread-per-request model.

My personal experience is with Netty. I had a simple app that I implemented well in both Tomcat and Netty, then load tested it with hundreds of concurrent requests (upwards of 800, I believe). Eventually Tomcat slowed to a crawl and exhibited extremely bursty/laggy behavior, whereas the Netty implementation simply saw response times rise while maintaining incredibly high overall throughput.
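The original app isn't shown, but the Netty side of such a comparison typically looks something like the following minimal echo server (a sketch in the Netty 4 API, which postdates this answer; port and handler are placeholders). A small, fixed pool of event-loop threads services all connections instead of one thread per request:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);    // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup();  // small pool of event-loop threads
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg);  // echo the received bytes back
                         }
                     });
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```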

Please note, this hinges on a solid implementation. NIO is still getting better with time. We are learning how to tune our server OSes to work better with it, as well as how to implement JVMs to better leverage that OS functionality. I don't think a winner can be declared yet, but I believe NIO will be the eventual winner, and it's doing quite well already.

rfeak answered Oct 13 '22