There is the connect timeout value passed to the connect method, and there is the read timeout set using the setSoTimeout method. I was wondering why there is no method to set a "write timeout"? I think there is a write-timeout concept in the TCP protocol.
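For illustration, here are the two timeouts I mean (example.com:80 is just a placeholder):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TimeoutDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            // Connect timeout: fail if the connection is not
            // established within 5 seconds.
            socket.connect(new InetSocketAddress("example.com", 80), 5000);
            // Read timeout: a blocked read() throws
            // SocketTimeoutException after 5 seconds.
            socket.setSoTimeout(5000);
            // But there is no setWriteTimeout(...) counterpart.
        }
    }
}
```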
It wouldn't be much use.
In general, TCP sending is asynchronous with respect to the application. All that send() does is put the data into the socket send buffer; it then returns, while the buffer is drained to the network asynchronously. So there is nothing to time out, and a successful return does not mean the data has reached the peer.
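In Java terms, a minimal sketch of what that means in practice (the host and payload are placeholders):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class AsyncSendDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) { // placeholder host
            OutputStream out = socket.getOutputStream();
            // write() returns as soon as the bytes are copied into the
            // socket send buffer; the kernel drains that buffer to the
            // network on its own schedule. A normal return therefore
            // does not mean the peer has received anything.
            out.write("hello".getBytes());
        }
    }
}
```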
send() blocks while the send buffer is full, and it would be possible to implement a timeout on that; indeed, you can do it yourself in non-blocking mode with select(). The problem is that what timed out could be either the current send or a prior one, so delivering a timeout would be rather confusing. Instead, what is delivered when all of TCP's internal send timers expire is a connection reset.
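As a concrete illustration, here is a minimal sketch of that select()-style approach in Java, using a non-blocking SocketChannel and a Selector. The helper name writeWithTimeout and the timeout value are mine, not part of any standard API, and note that all a timeout here can tell you is that the local send buffer stayed full; it cannot tell you whose data is stuck:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TimedWrite {
    // Hypothetical helper: writes buf to channel, throwing if no
    // progress is possible within timeoutMillis. The channel must
    // already be connected and in non-blocking mode
    // (channel.configureBlocking(false)).
    static void writeWithTimeout(SocketChannel channel, ByteBuffer buf,
                                 long timeoutMillis) throws IOException {
        try (Selector selector = Selector.open()) {
            channel.register(selector, SelectionKey.OP_WRITE);
            while (buf.hasRemaining()) {
                // Wait until the socket reports writable, i.e. the send
                // buffer has room again, or give up after the timeout.
                if (selector.select(timeoutMillis) == 0) {
                    throw new IOException("write timed out");
                }
                selector.selectedKeys().clear();
                // Copies as much as fits into the kernel send buffer;
                // success still says nothing about delivery to the peer.
                channel.write(buf);
            }
        }
    }
}
```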
"I think there is a write-timeout concept in the TCP protocol."
There is indeed, but that's at the level where TCP is asynchronously emptying the socket send buffer. It isn't under application control.