I'm having some performance problems working with Web API. In my real/production code I make a SOAP WS call; in this sample I just sleep. I have 400+ clients sending requests to the Web API.
I suspect it's a problem with Web API itself, because if I open 5 processes I can handle more requests than when I run only one process.
My test async version of the controller looks like this
[HttpPost]
public Task<HttpResponseMessage> SampleRequest()
{
    return Request.Content.ReadAsStringAsync()
        .ContinueWith(content =>
        {
            // Simulates the SOAP WS call; note this still blocks the
            // continuation's thread pool thread for the whole duration.
            Thread.Sleep(Timeout);

            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(content.Result, Encoding.UTF8, "text/plain")
            };
        });
}
The sync version looks like this
[HttpPost]
public HttpResponseMessage SampleRequest()
{
    // Blocks synchronously on the read, then sleeps to simulate the SOAP WS call.
    var content = Request.Content.ReadAsStringAsync().Result;
    Thread.Sleep(Timeout);

    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}
My client code for this test looks like this (it is configured to time out after 30 seconds)
for (int i = 0; i < numberOfRequests; i++)
{
    // Cold tasks: they only queue work on the thread pool when Start() is called below.
    tasks.Add(new Task(() =>
    {
        MakeHttpPostRequest();
    }));
}

foreach (var task in tasks)
{
    task.Start();
}
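For reference, a minimal sketch of the same fire-and-wait loop written with Task.Run (available in the .NET 4.5 test); MakeHttpPostRequest is only a placeholder here, since its real body is not shown in the question:

using System;
using System.Linq;
using System.Threading.Tasks;

public static class LoadTestClient
{
    public static void Run(int numberOfRequests)
    {
        // Task.Run queues each request on the thread pool directly; Task.WaitAll
        // lets the test wait until every request has completed (or timed out).
        var tasks = Enumerable.Range(0, numberOfRequests)
            .Select(_ => Task.Run(() => MakeHttpPostRequest()))
            .ToArray();

        Task.WaitAll(tasks);
        Console.WriteLine("All {0} requests completed", numberOfRequests);
    }

    // Placeholder for the real client call described in Edit 3.
    static void MakeHttpPostRequest() { /* ... */ }
}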
I was not able to format it nicely here, but the table with the results is available on GitHub.
CPU, memory and disk I/O are low. There are always at least 800 available threads (both worker and I/O threads)
public static void AvailableThreads()
{
    int workerThreads;
    int ioThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
    Console.WriteLine("Available threads {0} ioThreads {1}", workerThreads, ioThreads);
}
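A small extension of the snippet above (my own addition, not part of the original test): besides the available count, the pool's minimum and maximum sizes show how much headroom there is, since the pool only grows gradually once the minimum is exceeded.

public static void ThreadPoolSnapshot()
{
    int minWorker, minIo, maxWorker, maxIo, availWorker, availIo;
    ThreadPool.GetMinThreads(out minWorker, out minIo);
    ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
    ThreadPool.GetAvailableThreads(out availWorker, out availIo);

    // Worker threads run the controller code; I/O threads complete async I/O.
    Console.WriteLine("Worker: min {0}, max {1}, available {2}", minWorker, maxWorker, availWorker);
    Console.WriteLine("IO:     min {0}, max {1}, available {2}", minIo, maxIo, availIo);
}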
I've configured the DefaultConnectionLimit
System.Net.ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;
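A minimal sketch of where this setting typically goes on the client (an assumption; the question does not show the client's startup code). It has to run before the first request, because the limit is captured when a ServicePoint is created for an endpoint; the two extra properties are commonly tweaked in load tests and are not from the original code:

using System;
using System.Net;

static class ClientStartup
{
    static void Main()
    {
        // Allow effectively unlimited concurrent connections per endpoint.
        ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;

        // Optional: skip the Expect: 100-continue handshake and Nagle's algorithm,
        // both of which add per-request latency.
        ServicePointManager.Expect100Continue = false;
        ServicePointManager.UseNagleAlgorithm = false;

        // ... create the client and start sending requests ...
    }
}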
My question is: why is there a queue to answer those requests? In every test, the first responses take almost exactly the server's Thread.Sleep() time, but the responses get slower as new requests arrive.
Any tip on how I can discover where the bottleneck is?
It is a .NET 4.0 solution, using the self-host option.
Edit: I've also tested with .NET 4.5 and Web API 2.0 and got the same behaviour. The first requests are answered almost as soon as the sleep expires; later ones take up to 4x the sleep time to get an answer.
Edit 2: Gist of the Web API 1 implementation and gist of the Web API 2 implementation.
Edit 3: The MakeHttpPostRequest method creates a new WebApiClient for each call.
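A hypothetical reconstruction of that method (the question only says it creates a new WebApiClient per call); HttpClient, the URL and the payload here are stand-ins, not the real code:

using System;
using System.Net.Http;
using System.Text;

static class TestClient
{
    static void MakeHttpPostRequest()
    {
        // A new client per request, as described in Edit 3; 30-second timeout as in the test.
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) })
        {
            var content = new StringContent("payload", Encoding.UTF8, "text/plain");
            var response = client.PostAsync("http://localhost:8080/api/sample", content).Result;
            var body = response.Content.ReadAsStringAsync().Result;
        }
    }
}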
Edit 4: If I change the Thread.Sleep() to await Task.Delay(10000) in the .NET 4.5 version, it can handle all requests, as expected. So I don't think it's related to any network issue. Since Thread.Sleep() blocks the thread and Task.Delay() doesn't, it looks like there's an issue with Web API consuming more threads? But there are available threads in the thread pool...
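The changed controller then looks roughly like this (a sketch of the shape described above, not the exact gist): await Task.Delay releases the thread back to the pool during the wait, whereas Thread.Sleep keeps it blocked.

[HttpPost]
public async Task<HttpResponseMessage> SampleRequest()
{
    var content = await Request.Content.ReadAsStringAsync();

    // Non-blocking wait: no thread pool thread is held while the delay runs.
    await Task.Delay(10000);

    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}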
Edit 5: If I open 5 servers and double the number of clients, the servers can respond to all requests. So it looks like it's not a problem with the total number of requests, because I can 'scale' this solution by running a lot of processes on different ports. It's more a problem with the number of requests to the same process.
Run netstat on the server having the issue and look for any TIME_WAIT, FIN_WAIT_1, FIN_WAIT_2 and RST-Wait, RST-Wait2 entries. These are half-closed sessions where the stack is waiting for the other side to clean up, or where the other side sent a packet that the local machine could not process yet, depending on the stack's capacity to do the job.
The kicker is that even sessions showing ESTABLISHED could be in trouble, in that their timeout hasn't fired yet.
The symptoms described above are reminiscent of network or TCP/IP stack overload. Very similar behaviour is seen when routers get overloaded.