I'm new to NIO, and I am trying to figure out how Jetty leverages NIO.
My understanding of how traditional servlet containers that use blocking IO service a request is as follows:

- A request arrives, a thread is allocated to it, and the servlet method (doGet etc.) is invoked
- The servlet is handed an InputStream and an OutputStream
- The servlet reads from the InputStream and writes to the OutputStream
- The InputStream and OutputStream are basically tied to the respective streams of the underlying Socket
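The blocking model above can be sketched with plain java.net sockets (this is illustrative JDK-only code, not actual container internals; the handle method is a hypothetical stand-in for a servlet's doGet):

```java
import java.io.*;
import java.net.*;

public class BlockingIoSketch {
    // Hypothetical stand-in for a servlet's doGet: it is handed the
    // socket's own streams and the thread blocks on read() until data arrives.
    static void handle(InputStream in, OutputStream out) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line = reader.readLine();          // thread blocks here
        out.write(("echo: " + line + "\n").getBytes());
        out.flush();
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // One thread per connection, as in a traditional blocking container.
            Thread worker = new Thread(() -> {
                try (Socket s = server.accept()) {
                    handle(s.getInputStream(), s.getOutputStream());
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            worker.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.getOutputStream().write("hello\n".getBytes());
                client.getOutputStream().flush();
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                System.out.println(r.readLine()); // prints "echo: hello"
            }
            worker.join();
        }
    }
}
```

The key point is that the streams handed to the handler are the socket's own streams, so the worker thread is tied up for the lifetime of the read.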
What is different when an NIO connector is used? My guess is along the following lines:

- A request arrives
- Wrapper streams (an InputStream and an OutputStream) are created
- The servlet method (doGet etc.) is invoked, handing it the above wrapper streams
- The wrapper streams are backed by the underlying SocketChannel
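For context on what an NIO connector builds on, here is a minimal JDK-only sketch of the Selector/SocketChannel mechanism (my own illustration, not Jetty code): a single thread waits on many channels at once and only does work when the selector reports IO activity.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Connect a client so the selector loop has something to report.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client = SocketChannel.open(new InetSocketAddress("localhost", port));
        client.write(ByteBuffer.wrap("ping".getBytes()));

        boolean gotRead = false;
        while (!gotRead) {
            selector.select();                 // one thread waits on many channels
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // New connection: register it for read interest, no thread allocated yet.
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // Data is ready: this is where a container would dispatch a thread.
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    int n = ((SocketChannel) key.channel()).read(buf);
                    System.out.println("read " + n + " bytes");
                    gotRead = true;
                }
            }
            selector.selectedKeys().clear();
        }
        client.close();
        server.close();
        selector.close();
    }
}
```

The difference from the blocking model is that no thread is parked on a connection while it is idle; threads are only needed once the selector reports activity.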
From the Jetty documentation, I found the following:
SelectChannelConnector - This connector uses efficient NIO buffers with a non-blocking threading model. Jetty uses Direct NIO buffers, and allocates threads only to connections with requests. Synchronization simulates blocking for the servlet API, and any unflushed content at the end of request handling is written asynchronously.
I'm not sure I understand what "Synchronization simulates blocking for the servlet API" means.
Jetty uses Direct NIO buffers, and allocates threads only to connections with requests.
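On the "Direct NIO buffers" part of that quote: a direct ByteBuffer is allocated outside the Java heap, so the OS can perform IO to and from it without an extra copy. A small JDK-only illustration (not Jetty code):

```java
import java.nio.ByteBuffer;

public class DirectBufferSketch {
    public static void main(String[] args) {
        // Direct buffer: memory lives outside the Java heap, so the OS can
        // read/write it during IO without copying through a heap array.
        ByteBuffer direct = ByteBuffer.allocateDirect(8192);
        System.out.println(direct.isDirect());   // prints "true"

        // Heap buffer: backed by an ordinary byte[] on the Java heap.
        ByteBuffer heap = ByteBuffer.allocate(8192);
        System.out.println(heap.isDirect());     // prints "false"
    }
}
```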
The Jetty Server is the plumbing between a collection of Connectors that accept HTTP connections, and a collection of Handlers that service requests from the connections and produce responses, with the work being done by threads taken from a thread pool.
Jetty is an open-source project providing an HTTP server, HTTP client, and javax.servlet container.
Non-Blocking Servlets: Jetty has good support for asynchronous request processing.
You don't have it exactly correct. When Jetty uses an NIO connector (and Jetty 9 supports only NIO), it works as follows:
When the selector sees IO activity, it calls a handle method on the connection, which either handles the activity itself or dispatches a thread to do so.
If a thread is dispatched, it will attempt to read the connection and parse it. What happens next depends on whether the connection is HTTP, SPDY, HTTP/2, or WebSocket.
Once a thread is dispatched to a servlet, it looks to it like the servlet IO is blocking, but underneath the level of HttpInputStream and HttpOutputStream all the IO is async with callbacks. The blocking API uses a special blocking callback to achieve blocking. This means that if the servlet chooses to use async IO, then it is just bypassing the blocking callback and using the async API more or less directly.
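The "blocking callback" idea described above can be sketched in plain Java (my own simplified illustration, not Jetty's actual BlockingCallback classes): a blocking facade parks the calling thread on a latch until the asynchronous callback fires, while the IO underneath remains callback-driven.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingCallbackSketch {
    // Hypothetical async API: runs the "write" elsewhere and invokes the
    // callback when it completes.
    static void asyncWrite(String data, ExecutorService pool, Runnable onComplete) {
        pool.submit(() -> {
            // ... perform the write asynchronously ...
            onComplete.run();
        });
    }

    // Blocking facade over the async API: the caller waits on a latch that
    // the completion callback counts down.
    static void blockingWrite(String data, ExecutorService pool) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        asyncWrite(data, pool, done::countDown);
        done.await();   // the "servlet thread" blocks here; the IO underneath is async
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        blockingWrite("hello", pool);
        System.out.println("write completed"); // prints "write completed"
        pool.shutdown();
    }
}
```

A servlet using async IO directly would simply register its own callback instead of going through the latch-based facade.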
This view is slightly complicated by http2 and spdy, which are multiplexed, so they can involve an extra dispatch.
Any HTTP framework that does not dispatch can go really, really fast in benchmark code, but when faced with a real application that can do silly things like block on databases, file systems, REST services, etc., the lack of dispatch means that one connection can hold up all the other connections on the system.
For some more info on how jetty handles async and dispatch see: