I have pretty much already decided not to use asynchronous, non-blocking Java NIO. The complexity-versus-benefit trade-off is questionable in general, and I don't think it's worth it in this project in particular.
But most of what I read about NIO, and comparisons with the older java.io.*, focuses on non-blocking, asynchronous NIO versus thread-per-connection synchronous I/O using java.io.*. However, NIO can also be used in synchronous, blocking, thread-per-connection mode, which rarely seems to be discussed.
Here's the question: Is there any performance advantage of synchronous, blocking NIO versus traditional synchronous, blocking I/O (java.io.*)? Both would be thread-per-connection. How does the complexity compare?
Note that this is a general question, but at the moment I am primarily concerned with TCP socket communication.
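To make the comparison concrete, here is a minimal sketch (class name, port, and buffer size are arbitrary) of what I mean by synchronous, blocking NIO with a thread per connection: a SocketChannel left in its default blocking mode and serviced by a dedicated thread, structurally the same as a classic java.io server.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class BlockingNioEcho {
    public static void main(String[] args) throws IOException {
        // ServerSocketChannel is blocking by default -- no Selector involved.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        while (true) {
            SocketChannel client = server.accept();   // blocks, like ServerSocket.accept()
            new Thread(() -> handle(client)).start(); // one thread per connection
        }
    }

    private static void handle(SocketChannel client) {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        try (client) {
            while (client.read(buf) != -1) {          // blocks, like InputStream.read()
                buf.flip();
                client.write(buf);                    // echo back (blocking write)
                buf.clear();
            }
        } catch (IOException ignored) {
        }
    }
}
```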
Java IO (Input/Output) is used to perform read and write operations; the java.io package contains the classes required for them. Java NIO (New IO), introduced in JDK 1.4, was designed for higher-throughput IO operations and is an alternative to the standard IO APIs.
Java NIO supports non-blocking IO. For instance, a thread can ask a channel to read data into a buffer and do something else while the read is in progress; once the data is in the buffer, the thread continues processing it.
Java NIO is buffer-oriented: data is read from and written to buffers, which are filled and drained through channels. A buffer acts as a container for the data, holding primitive values in a block of memory that can be reused across operations.
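As a rough illustration of the buffer-oriented style (file name and buffer size are just placeholders), reading through a FileChannel into a ByteBuffer looks like this:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class BufferRead {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get("data.txt"), StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(1024);    // the container the channel fills
            while (ch.read(buf) != -1) {                   // channel -> buffer
                buf.flip();                                // switch the buffer to reading mode
                while (buf.hasRemaining()) {
                    System.out.print((char) buf.get());    // drain the buffer
                }
                buf.clear();                               // reuse the buffer for the next read
            }
        }
    }
}
```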
Java NIO is often said to be faster than regular IO because it supports non-blocking mode. Non-blocking IO does not require a dedicated thread per connection, so it scales to large numbers of connections; this is a scalability advantage rather than a guarantee that any individual connection is served faster.
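For contrast, the non-blocking style multiplexes many connections onto one thread with a Selector. A bare-bones sketch (port is arbitrary, partial writes and error handling ignored):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NonBlockingEcho {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9001));
        server.configureBlocking(false);                       // must be non-blocking to register
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                                 // one thread waits on all channels
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);                  // returns immediately
                    if (n == -1) { key.cancel(); client.close(); continue; }
                    buf.flip();
                    client.write(buf);                         // echo (may be partial in real code)
                }
            }
        }
    }
}
```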
An advantage of NIO over "traditional" IO is that NIO can use direct buffers that allow the OS to use DMA for some operations (e.g. reading from a network connection directly into a memory-mapped file) and thereby avoid copying data to intermediate buffers.
If you're moving large amounts of data in a scenario where this technique does avoid copy operations that would otherwise be performed, this can have a big impact on performance.
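If the data being moved is file contents headed for a socket, the usual zero-copy path is FileChannel.transferTo, which lets the OS move the bytes without staging them in a user-space buffer. A sketch, with the file name, host, and port as placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        try (FileChannel file = FileChannel.open(Paths.get("big.dat"), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("example.com", 9000))) {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // transferTo may move fewer bytes than requested, so loop until done.
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```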
It basically boils down to the number of concurrent connections and how busy those connections are. Blocking (standard thread per connection) is faster, both in latency and throughput (about twice as fast for a simple echo server). So if your system can cope with maintaining a thread for each connection (<1000 connections as a rule of thumb), go for the blocking approach. If you have lots of mostly idle connections (e.g. Comet long-poll requests or IMAP IDLE connections), then switching to a non-blocking architecture could help scale your system.
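For comparison, the "standard thread per connection" approach mentioned above looks roughly like this with plain java.io (a sketch; class name, port, and buffer size are arbitrary):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ClassicIoEcho {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();          // blocks until a client connects
                new Thread(() -> handle(client)).start(); // dedicated thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        byte[] buf = new byte[4096];
        try (client;
             InputStream in = client.getInputStream();
             OutputStream out = client.getOutputStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);                     // echo back
            }
        } catch (IOException ignored) {
        }
    }
}
```

Structurally this is the same as the blocking NIO server sketched near the top of the question, which is why the choice between the two tends to come down to features (buffers, channels, zero-copy transfers) rather than raw per-connection speed.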