 

Java Socket: why is there no "write timeout" for the socket

Tags:

java

sockets

There is a connect timeout passed to the connect method, and there is a read timeout set using the setSoTimeout method. I was wondering why there is no method to set a "write timeout". I think there is a write timeout concept in the TCP protocol.

asked Jan 30 '18 at 14:01 by mianlaoshu


1 Answer

It wouldn't be much use.

In general, TCP sending is asynchronous to the application. All that send() does is put the data into the socket send buffer; it then returns, while the buffer is drained to the network asynchronously. So there is nothing to time out, and the absence of a timeout does not mean the data has been delivered to the peer.
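To make that concrete, here is a minimal sketch; the peer (example.com:80) and the 8 KB payload are placeholders, not anything from the question. The call returns as soon as the bytes are copied into the send buffer, not when the peer has received them.

import java.io.OutputStream;
import java.net.Socket;

public class AsyncSendDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder peer; any reachable TCP server will do.
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();

            long start = System.nanoTime();
            // Returns once the bytes sit in the kernel's socket send buffer;
            // the kernel transmits them to the peer asynchronously later.
            out.write(new byte[8 * 1024]);
            long micros = (System.nanoTime() - start) / 1_000;

            System.out.println("write() returned after " + micros + " us");
        }
    }
}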

send() blocks while the send buffer is full, and it would be possible to implement a timeout on that; indeed you can do it yourself in non-blocking mode with select(), as sketched below. The problem is that what timed out could be either the current send or a prior one, so delivering a timeout would be rather confusing. What is actually delivered when all the TCP send timers time out internally is a connection reset.
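If an application really needs a per-write timeout, it has to build one itself. A rough sketch along those lines, using java.nio non-blocking mode and a Selector (the Java counterpart of select()); the peer address, payload size, and the 5-second figure are arbitrary assumptions, and the caveat above still applies: a timeout only tells you the send buffer stayed full, not which bytes are stuck.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class WriteWithTimeout {

    // Writes the whole buffer, failing if no progress can be made for
    // timeoutMillis. A timeout here only means the send buffer stayed full;
    // earlier, already-buffered data may be what is actually stuck.
    static void writeFully(SocketChannel channel, ByteBuffer buf, long timeoutMillis)
            throws IOException {
        try (Selector selector = Selector.open()) {
            channel.register(selector, SelectionKey.OP_WRITE);
            while (buf.hasRemaining()) {
                if (selector.select(timeoutMillis) == 0) {
                    throw new SocketTimeoutException("write timed out");
                }
                selector.selectedKeys().clear();
                channel.write(buf); // may write fewer bytes than remain
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder peer; replace with a real host and port to try this.
        try (SocketChannel channel =
                     SocketChannel.open(new InetSocketAddress("example.com", 80))) {
            channel.configureBlocking(false);
            writeFully(channel, ByteBuffer.wrap(new byte[64 * 1024]), 5_000);
        }
    }
}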

I think there is a write timeout concept in the TCP protocol.

There is indeed, but that's at the level where TCP is asynchronously emptying the socket send buffer. It isn't under application control.

answered Sep 18 '22 at 21:09 by user207421