From the Netty API Documentation
connectTimeoutMillis = "the connect timeout in milliseconds. 0 if disabled."
And
ReadTimeoutHandler = Raises a ReadTimeoutException when no data was read within a certain period of time.
From a client perspective, am I correct in interpreting the aforementioned as follows?
The client will attempt to connect to the host for up to connectTimeoutMillis. If a connection is established and no ReadTimeoutHandler has been added to the pipeline, the Channel can wait on a response indefinitely. If a ReadTimeoutHandler was added to the pipeline, a ReadTimeoutException will be raised once timeoutSeconds elapses without any data being read.
Generally speaking, I'd like to only attempt to connect to a host for up to 'x' seconds, but if a request was sent across the wire, I'd like to wait up to 'y' seconds for the response. If it shapes/influences the answer, the client is Netty, but the server is not.
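For context, the kind of setup I have in mind looks roughly like the sketch below (Netty 4 API; the host, port, and the 5/30-second values are just placeholders for 'x' and 'y'):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;

public class ClientTimeouts {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                // 'x': give up on the connect attempt after 5 seconds
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5_000)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // 'y': raise ReadTimeoutException if nothing is read for 30 seconds
                        ch.pipeline().addLast(new ReadTimeoutHandler(30));
                        // ... frame decoder and response handler would follow ...
                    }
                });
            b.connect("example.com", 8080).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}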
Follow-up: Is timeoutSeconds on the ReadTimeoutHandler the timeout between successive bytes read, or for the entire request/response? Example: If timeoutSeconds was 60, and a single byte (out of a total of 1024) was read every 59 seconds, would the entire response be read successfully in 60416 seconds, or would it fail because the total elapsed time exceeded 60 seconds?
ReadTimeoutHandler doesn't understand the concept of a response. It only understands read events: a messageReceived event in Netty 3, or an inboundBufferUpdated/channelRead event in Netty 4. Speaking from an NIO perspective, the exact impact of this behaviour depends on where the ReadTimeoutHandler sits in your pipeline. (I've never used OIO, so I can't say whether the behaviour is exactly the same.)
If ReadTimeoutHandler is below any frame decoder in your pipeline (i.e. closer to the network), then the behaviour you describe is correct: a single byte read will reset the timer and, as you've identified, could result in the response taking a very long time to be read. If you were writing a server, this could be exploited to mount a denial-of-service attack at very little cost to the attacker.
If ReadTimeoutHandler is above your frame decoder then it applies to your entire response. I think this is the behaviour you're looking for.
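In Netty 4 terms, "above the frame decoder" simply means adding the ReadTimeoutHandler after the decoder, so its read events only fire for fully decoded frames. A minimal sketch, assuming a length-prefixed protocol (the decoder parameters and the 60-second value are illustrative):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.timeout.ReadTimeoutHandler;

public class ClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        // Closest to the network: reassembles complete, length-prefixed frames from raw bytes.
        p.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(1_048_576, 0, 4, 0, 4));
        // Above the decoder: only sees whole frames, so single bytes trickling in
        // do not reset the 60-second timer.
        p.addLast("readTimeout", new ReadTimeoutHandler(60));
        // p.addLast("handler", new MyResponseHandler()); // your business handler goes last
    }
}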
Note that the ReadTimeoutHandler is also unaware of whether you have sent a request; it only cares whether data has been read from the socket. If your connection is persistent, and you only want read timeouts to fire while a request is outstanding, you'll need to build a request/response-aware timeout handler.
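One way to build such a handler (this is my own sketch, not a stock Netty class: it arms a ReadTimeoutHandler when a request is written and disarms it when a response is read; the handler names are arbitrary):

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.timeout.ReadTimeoutHandler;

public class RequestAwareTimeoutHandler extends ChannelDuplexHandler {
    private final int timeoutSeconds;

    public RequestAwareTimeoutHandler(int timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        // A request is going out: start watching for the response.
        if (ctx.pipeline().get("readTimeout") == null) {
            ctx.pipeline().addFirst("readTimeout", new ReadTimeoutHandler(timeoutSeconds));
        }
        super.write(ctx, msg, promise);
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // A response message has arrived: stop the clock.
        if (ctx.pipeline().get("readTimeout") != null) {
            ctx.pipeline().remove("readTimeout");
        }
        super.channelRead(ctx, msg);
    }
}

If a response can span several inbound messages, you would only remove the timeout after the final one; placement relative to your frame decoder matters here for the same reasons as above.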
Yes, you have correctly identified the difference between connect timeout and read timeout. Note that whatever any documentation may say to the contrary, the default or zero connect timeout means about 60-70 seconds, not infinity, and you can only use the connect timeout parameter to reduce that default, not increase it.
Read timeout starts when you call read() and ends when it expires or data arrives. It is the maximum time a single read() call may block waiting for data to arrive; once any data has arrived, that call returns without blocking a second time, and the timer starts afresh on the next read().
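The same semantics are easy to see with plain blocking sockets; a minimal illustration (the host, port, and timeout values are arbitrary):

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class BlockingTimeouts {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Connect timeout: give up if the connection isn't established within 5 seconds.
            socket.connect(new InetSocketAddress("example.com", 80), 5_000);
            // Read timeout: each read() may block for at most 10 seconds waiting for data.
            socket.setSoTimeout(10_000);
            socket.getOutputStream().write(
                "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
            InputStream in = socket.getInputStream();
            try {
                int first = in.read(); // returns as soon as the first byte arrives
                System.out.println("first byte: " + first);
            } catch (SocketTimeoutException e) {
                System.out.println("no data arrived within 10 seconds");
            }
        }
    }
}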