I am using RestSharp as the underlying HTTP client library for a stress/throughput test client against a black-box service. The ThreadPool and ServicePoint connection limits have been lifted to 5000, though that shouldn't be much of a concern since we are testing at around 500-1000 requests per second. A high-resolution (microsecond) timer component issues requests at the rate we want to test.
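For reference, a minimal sketch of how those limits can be lifted before any requests are issued (the value 5000 matches the limits mentioned above):

using System.Net;
using System.Threading;

// Raise the per-host outbound connection cap (ServicePoint) and the
// thread pool floor so neither throttles the test client.
ServicePointManager.DefaultConnectionLimit = 5000;
ThreadPool.SetMinThreads(workerThreads: 5000, completionPortThreads: 5000);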
The RestSharp code roughly goes:
restClient.ExecuteAsync(postRequest, res =>
{
    stopwatch.Stop();
    lock (this.countLocker)
    {
        this.roundTrips.Add(stopwatch.ElapsedMilliseconds);
        if (res.ResponseStatus == ResponseStatus.Completed &&
            (res.StatusCode == HttpStatusCode.OK ||
             res.StatusCode == HttpStatusCode.NoContent))
        {
            this.responseCount++;
        }
        else
        {
            // Treat all other status codes as errors.
            this.reportError(res);
        }
    }
});
When pumping too many requests, we observed that the service would, after some time, return some 503 error responses, but RestSharp treats those as Completed responses, since a 503 is a valid HTTP response from the server; no actual exception gets thrown.
What is not clear is what happens when RestSharp encounters an exception due to an underlying connection error, such as:
The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
at RestSharp.Http.GetRawResponseAsync(IAsyncResult result, Action`1 callback)
at RestSharp.Http.ResponseCallback(IAsyncResult result, Action`1 callback)
or
The underlying connection was closed: An unexpected error occurred on a receive.
at RestSharp.Http.GetRawResponseAsync(IAsyncResult result, Action`1 callback)
at RestSharp.Http.ResponseCallback(IAsyncResult result, Action`1 callback)
These errors seem to suggest that RestSharp is using HTTP keep-alive for its connections. Is there a way to control this behaviour? I cannot locate any setting that instructs RestSharp not to use keep-alive.
Beyond that, I am also trying to gain a better understanding of how to investigate the actual issue of the server breaking those connections. Is it simply a matter of the client accumulating more connections than the server can deal with, since the server cannot keep up with the required response rate?
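For what it's worth, these connection failures do surface through the callback rather than as thrown exceptions. A minimal sketch of how they can be separated from HTTP-level errors, assuming the same callback-based RestSharp API as above (reportTransportError and reportStatus are hypothetical helpers):

restClient.ExecuteAsync(postRequest, res =>
{
    if (res.ResponseStatus == ResponseStatus.Error)
    {
        // Transport-level failure: no HTTP response was received.
        // res.ErrorException holds the underlying WebException, e.g.
        // "The underlying connection was closed: ...".
        this.reportTransportError(res.ErrorException);
    }
    else if (res.ResponseStatus == ResponseStatus.Completed)
    {
        // An HTTP response arrived; res.StatusCode may still be an
        // error code such as 503 Service Unavailable.
        this.reportStatus(res.StatusCode);
    }
});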
After additional investigation and tinkering with the test client code, I think I've come to some understanding of what is happening.
By adding a monitoring count of the number of open HTTP/TCP connections to the server, it can be observed that RestSharp keeps HTTP connections around in a keep-alive state and reuses them for subsequent requests. At a sustainable request rate and throughput, there is no problem; RestSharp can reuse a fixed pool of connections and keep them alive perpetually.
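One way such a monitoring count can be taken, using System.Net.NetworkInformation (serverAddress and serverPort are placeholders for the actual endpoint under test):

using System.Linq;
using System.Net;
using System.Net.NetworkInformation;

// Count established TCP connections from this machine to the service.
static int CountOpenConnections(IPAddress serverAddress, int serverPort)
{
    return IPGlobalProperties.GetIPGlobalProperties()
        .GetActiveTcpConnections()
        .Count(c => c.RemoteEndPoint.Address.Equals(serverAddress)
                 && c.RemoteEndPoint.Port == serverPort
                 && c.State == TcpState.Established);
}

Sampling this count periodically during the test makes the jump in open connections described below directly visible.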
But at rates the server cannot always keep up with, the client has to open additional connections because previous HTTP requests have not yet completed, resulting in a jump in open connections. Later, if the client reuses a kept-alive connection that it believes the server is still honouring but the server has since dropped, it fails with that "A connection that was expected to be kept alive was closed by the server." message.
I solved it with this single line, which enables TLS 1.0, 1.1, and 1.2 (a TLS protocol mismatch can also produce the same "underlying connection was closed" error):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls;
The solution for this is to set keep-alive to false:
var client = new RestClient("https://test.com");
client.ConfigureWebRequest(r =>
{
    // Disable the Expect: 100-continue handshake and turn off
    // HTTP keep-alive on the underlying HttpWebRequest.
    r.ServicePoint.Expect100Continue = false;
    r.KeepAlive = false;
});
After this change, each connection is closed after the response is received. This adds the overhead of opening a new connection for each request, but it prevents the unexpected "connection was closed" errors.