Can anyone explain the difference between the timeout configuration on the server versus on the client? For example, what would happen if a client sets the sendTimeout to 5 minutes while the server configuration has it set to 1 minute? Does the client prevail, since it initiates the communication?
Thanks for your help!
The most common default timeout values within GFI Archiver are 2 minutes for querying Microsoft SQL Server and 5 minutes for WCF connections (the latter is used heavily for internal communication between GFI Archiver's own modules).
ReceiveTimeout (TimeSpan) – used by the Service Framework Layer to initialize the session-idle timeout, i.e. how long a session may remain idle before it times out. The default is 00:10:00.
OpenTimeout (TimeSpan) – the interval of time provided for an open operation to complete, including security handshakes (WS-Trust, WS-SecureConversation, etc.). The default is 00:01:00.
CloseTimeout (TimeSpan) – the interval of time provided for a close operation to complete. The default is 00:01:00.
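For reference, here is a minimal sketch of how these timeouts are set on a binding in app.config; the binding name "myNetTcpBinding" is hypothetical, and the values shown are the stock WCF defaults:

```xml
<!-- Sketch only: the binding name is hypothetical; values are the WCF defaults. -->
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- openTimeout/closeTimeout: channel open/close, including security handshakes -->
      <!-- sendTimeout: the whole send operation; receiveTimeout: session-idle time -->
      <binding name="myNetTcpBinding"
               openTimeout="00:01:00"
               closeTimeout="00:01:00"
               sendTimeout="00:01:00"
               receiveTimeout="00:10:00" />
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```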
I think I've got this; take a look at http://omsite.blogspot.com/2008/04/playing-with-wcf-nettcpbinding-timeouts.html.
When the client initiates a call to the server, the client-side sendTimeout and the server-side receiveTimeout are in effect. The client has to send (or push) all of the data before the receiveTimeout set on the server expires, and the server has to complete its operation and return the results to the client before the sendTimeout set on the client expires.
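Applied to the question above: neither side prevails, because each endpoint enforces only its own setting; the values are never negotiated between client and server. A sketch of the two configurations (binding names hypothetical):

```xml
<!-- Client app.config (binding name hypothetical): the client gives up
     if the whole request/response exchange takes longer than 5 minutes. -->
<binding name="clientBinding" sendTimeout="00:05:00" />

<!-- Server app.config (binding name hypothetical): the server tears down
     a session that sits idle for more than 1 minute, regardless of the
     client's sendTimeout. -->
<binding name="serverBinding" receiveTimeout="00:01:00" />
```

So with the values from the question, a slow transfer would still be cut off by the server's 1-minute receiveTimeout even though the client is willing to wait 5 minutes.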
If the roles are reversed, i.e. the server opens communication back to the client (as in a callback), then the sendTimeout on the server and the receiveTimeout on the client come into play, as sketched below.
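The same attributes cover the callback direction, just with the roles swapped. A sketch, again with hypothetical binding names:

```xml
<!-- Callback direction, e.g. a duplex contract over netTcpBinding. -->
<!-- Server side: sendTimeout now bounds the callback invocation. -->
<binding name="duplexServerBinding" sendTimeout="00:01:00" />

<!-- Client side: receiveTimeout now bounds idle time on the callback channel. -->
<binding name="duplexClientBinding" receiveTimeout="00:10:00" />
```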
There are also OpenTimeout and CloseTimeout, which control how long establishing and tearing down the channel connection may take; these operate at the lower channel levels (like sockets, etc.).