I'm developing a .NET 4-based application that has to make requests to third-party servers in order to get information from them. I'm using HttpClient to make these HTTP requests.
I have to create a hundred or a thousand requests in a short period of time. I would like to throttle the creation of these requests to some limit (defined by a constant or similar) so that the other servers don't receive too many requests at once.
I've checked out this link, which shows how to limit the number of tasks created at any one time.
Here is my non-working approach:
// create the factory
var factory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(level));

// use the factory to create a new task that will create the request to the third-party server
var task = factory.StartNew(() => {
    return new HttpClient().GetAsync(url);
}).Unwrap();
Of course, the problem here is that even though only one task at a time is created, many requests end up being created and processed at the same time, because the asynchronous work started by HttpClient does not run on that scheduler. I could not find a way to make HttpClient use a particular scheduler.
How should I handle this situation? I would like to limit the number of requests created to a certain limit, but without blocking while waiting for those requests to finish.
Is this possible? Any ideas?
If you can use .NET 4.5, one way would be to use TransformBlock from TPL Dataflow and set its MaxDegreeOfParallelism. Something like:
// each item posted to the block downloads one URL;
// at most 'level' downloads run at the same time
var block = new TransformBlock<string, byte[]>(
    url => new HttpClient().GetByteArrayAsync(url),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = level });

// feed all URLs into the block and signal that no more are coming
foreach (var url in urls)
    block.Post(url);
block.Complete();

// collect the results as they become available
// (this has to run inside an async method because of await)
var result = new List<byte[]>();
while (await block.OutputAvailableAsync())
    result.Add(block.Receive());
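Because of the await, the snippet above needs to live inside an async method. A minimal self-contained sketch of how that might look (DownloadAllAsync is just a hypothetical name, and a single HttpClient instance is shared across all requests, which is generally preferable to creating one per request):

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

static async Task<List<byte[]>> DownloadAllAsync(IEnumerable<string> urls, int level)
{
    // one HttpClient reused for every request
    var client = new HttpClient();

    // the block invokes the download delegate for at most 'level' URLs concurrently
    var block = new TransformBlock<string, byte[]>(
        url => client.GetByteArrayAsync(url),
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = level });

    foreach (var url in urls)
        block.Post(url);
    block.Complete();

    // drain the block's output as results complete
    var result = new List<byte[]>();
    while (await block.OutputAvailableAsync())
        result.Add(await block.ReceiveAsync());
    return result;
}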
There is also another way of looking at this, through ServicePointManager. Using that class, you can set limits on MaxServicePoints (how many servers you can be connected to at once) and DefaultConnectionLimit (how many connections there can be to each server). This way, you could start all your Tasks at the same moment, but only a limited number of them would actually do something. Although limiting the number of Tasks (e.g. by using TPL Dataflow, as I suggested above) will most likely be more efficient.
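For reference, a minimal sketch of those ServicePointManager settings (the limit values here are placeholders, and they should be set before the first request goes out so that new service points pick them up):

using System.Net;

// allow connections to at most 10 distinct servers at once (placeholder value)
ServicePointManager.MaxServicePoints = 10;

// allow at most 'level' concurrent connections per server;
// further requests to that server are queued rather than sent immediately
ServicePointManager.DefaultConnectionLimit = level;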