Let's look at the following 2 examples:
public class MyClass
{
    public async Task Main()
    {
        var result1 = "";
        var result2 = "";
        var request1 = await DelayMe();
        var request2 = await DelayMe();
        result1 = request1;
        result2 = request2;
    }

    private static async Task<String> DelayMe()
    {
        await Task.Delay(2000);
        return "";
    }
}
And:
public class MyClass
{
    public async Task Main()
    {
        var result1 = "";
        var result2 = "";
        var request1 = DelayMe();
        var request2 = DelayMe();
        result1 = await request1;
        result2 = await request2;
    }

    private static async Task<String> DelayMe()
    {
        await Task.Delay(2000);
        return "";
    }
}
The first example shows how you would typically write async/await code: one thing happens after the other, and each call is awaited immediately. The second one calls the async Task method first, but awaits it later.
The first example takes a bit over 4000ms to execute because the first request is awaited to completion before the second one is started; the second example takes a bit over 2000ms. This happens because the Task actually starts running as soon as execution steps over the var request1 = DelayMe(); line, which means that request1 and request2 are running in parallel. At this point it looks like the await keyword just waits for the Task to complete.
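To verify the timing difference in isolation, here is a minimal, self-contained sketch (the TimingDemo class, the SequentialAsync/ConcurrentAsync names, and the Stopwatch output are mine, added for illustration, not part of the original examples) that runs both patterns and prints how long each takes:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class TimingDemo
{
    public static async Task Main()
    {
        var sw = Stopwatch.StartNew();
        await SequentialAsync();
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms"); // roughly 4000 ms

        sw.Restart();
        await ConcurrentAsync();
        Console.WriteLine($"Concurrent: {sw.ElapsedMilliseconds} ms"); // roughly 2000 ms
    }

    private static async Task SequentialAsync()
    {
        // Each call is awaited before the next one starts.
        await DelayMe();
        await DelayMe();
    }

    private static async Task ConcurrentAsync()
    {
        // Both tasks are started first, then awaited, so the two delays overlap.
        var request1 = DelayMe();
        var request2 = DelayMe();
        await request1;
        await request2;
    }

    private static async Task<String> DelayMe()
    {
        await Task.Delay(2000);
        return "";
    }
}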
The second approach feels and acts like an await Task.WhenAll(request1, request2), but in this scenario, if one of the two requests fails, you get the exception immediately instead of waiting for everything to complete and then getting an AggregateException.
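For comparison, the Task.WhenAll shape of the same Main would look roughly like this (a sketch that reuses the DelayMe method from the examples above):

public async Task Main()
{
    var request1 = DelayMe();
    var request2 = DelayMe();

    // Await both tasks together; WhenAll returns their results in the same order.
    var results = await Task.WhenAll(request1, request2);
    var result1 = results[0];
    var result2 = results[1];
}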
My question is: is there a drawback (performance or otherwise) in using the second approach to run multiple awaitable Tasks in parallel when the result of one doesn't depend on the execution of the other? Looking at the lowered code, it looks like the second example generates a System.Threading.Tasks.Task`1 per awaited item while the first one doesn't. Is this still going through the async/await state-machine flow?
if one of the two requests fails, you get the exception immediately instead of waiting for everything to complete and then getting an AggregateException.
If something fails in the first request, then yes. If something fails in the second request, then no; you wouldn't check the second request's results until that task is awaited.
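A small sketch of that timing (FailFast is a hypothetical stand-in for a request that throws; DelayMe is the method from the question):

public async Task Main()
{
    var request1 = DelayMe();   // completes after ~2000 ms
    var request2 = FailFast();  // faults after ~100 ms

    // Nothing is observed here; the exception is stored inside request2's Task.
    var result1 = await request1;

    // Only this await, roughly 2000 ms in, rethrows the stored exception.
    var result2 = await request2;
}

private static async Task<String> FailFast()
{
    await Task.Delay(100);
    throw new InvalidOperationException("second request failed");
}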
My question is: is there a drawback (performance or otherwise) in using the second approach to run multiple awaitable Tasks in parallel when the result of one doesn't depend on the execution of the other? Looking at the lowered code, it looks like the second example generates a System.Threading.Tasks.Task`1 per awaited item while the first one doesn't. Is this still going through the async/await state-machine flow?
It's still going through the state machine flow. I tend to recommend await Task.WhenAll because the intent of the code is more explicit, but there are some people who don't like the "always wait even when there are exceptions" behavior. The flip side to that is that Task.WhenAll always collects all the exceptions - if you have fail-fast behavior, then some exceptions could be ignored.
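One way to see that difference (a sketch, assuming request1 and request2 are Task<String> instances that may both fault): awaiting Task.WhenAll rethrows only the first exception, while the Task it returns still carries all of them.

var whenAll = Task.WhenAll(request1, request2);
try
{
    // await rethrows only the first exception...
    await whenAll;
}
catch (Exception ex)
{
    Console.WriteLine($"First failure: {ex.Message}");

    // ...but the WhenAll task itself holds every failure in an AggregateException.
    if (whenAll.Exception != null)
    {
        foreach (var inner in whenAll.Exception.InnerExceptions)
        {
            Console.WriteLine(inner.Message);
        }
    }
}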
Regarding performance, concurrent execution would be better because the operations overlap in time instead of running one after the other. There's no danger of thread pool exhaustion from this, because async/await does not use additional threads.
As a side note, I recommend using the term "asynchronous concurrency" for this rather than "parallel", since to many people "parallel" implies parallel processing, i.e., Parallel or PLINQ, which would be the wrong technologies to use in this case.