It's recommended that one use ConfigureAwait(false) whenever you can, especially in libraries, because it can help avoid deadlocks and improve performance.
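For example, a library method that awaits an I/O call typically looks like this (a minimal sketch, not from the original post; GetItemAsync, httpClient, BuildUri and Parse are made-up names):

public async Task<string> GetItemAsync(string key)
{
    // ConfigureAwait(false): the continuation does not need to resume on the
    // caller's SynchronizationContext, so a caller that blocks on the returned
    // task (e.g. with .Result) cannot deadlock on this await.
    var json = await httpClient.GetStringAsync(BuildUri(key)).ConfigureAwait(false);
    return Parse(json);
}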
I have written a library that makes heavy use of async (it accesses web services for a DB). The users of the library were getting a deadlock, and after much painful debugging and tinkering I tracked it down to the single use of await Task.Yield(). Everywhere else that I have an await, I use .ConfigureAwait(false); however, that is not supported on Task.Yield().

What is the recommended solution for situations where one needs the equivalent of Task.Yield().ConfigureAwait(false)?

I've read about how there was a SwitchTo method that was removed. I can see why that could be dangerous, but why is there no equivalent of Task.Yield().ConfigureAwait(false)?
Edit:
To provide further context for my question, here is some code. I am implementing an open source library for accessing DynamoDB (a distributed database as a service from AWS) that supports async. A number of operations return IAsyncEnumerable<T> as provided by the IX-Async library. That library doesn't provide a good way of generating async enumerables from data sources that provide rows in "chunks", i.e. each async request returns many items. So I have my own generic type for this. The library supports a read-ahead option allowing the user to specify how much data should be requested ahead of when it is actually needed by a call to MoveNext().

Basically, how this works is that I make requests for chunks by calling GetMore() and passing state along between these. I put those tasks in a chunks queue, dequeue them, and turn them into actual results that I put in a separate queue. The NextChunk() method is the issue here. Depending on the value of ReadAhead, I will keep getting the next chunk as soon as the last one is done (All), or not until a value is needed but not available (None), or only get the next chunk beyond the values that are currently being used (Some). Because of that, getting the next chunk should run in parallel and not block getting the next value. The enumerator code for this is:
private class ChunkedAsyncEnumerator<TState, TResult> : IAsyncEnumerator<TResult>
{
    private readonly ChunkedAsyncEnumerable<TState, TResult> enumerable;
    private readonly ConcurrentQueue<Task<TState>> chunks = new ConcurrentQueue<Task<TState>>();
    private readonly Queue<TResult> results = new Queue<TResult>();
    private CancellationTokenSource cts = new CancellationTokenSource();
    private TState lastState;
    private TResult current;
    private bool complete; // whether we have reached the end

    public ChunkedAsyncEnumerator(ChunkedAsyncEnumerable<TState, TResult> enumerable, TState initialState)
    {
        this.enumerable = enumerable;
        lastState = initialState;
        if (enumerable.ReadAhead != ReadAhead.None)
            chunks.Enqueue(NextChunk(initialState));
    }

    private async Task<TState> NextChunk(TState state, CancellationToken? cancellationToken = null)
    {
        await Task.Yield(); // ** causes deadlock
        var nextState = await enumerable.GetMore(state, cancellationToken ?? cts.Token).ConfigureAwait(false);
        if (enumerable.ReadAhead == ReadAhead.All && !enumerable.IsComplete(nextState))
            chunks.Enqueue(NextChunk(nextState)); // This is a read ahead, so it shouldn't be tied to our token
        return nextState;
    }

    public Task<bool> MoveNext(CancellationToken cancellationToken)
    {
        cancellationToken.ThrowIfCancellationRequested();
        if (results.Count > 0)
        {
            current = results.Dequeue();
            return TaskConstants.True;
        }
        return complete ? TaskConstants.False : MoveNextAsync(cancellationToken);
    }

    private async Task<bool> MoveNextAsync(CancellationToken cancellationToken)
    {
        Task<TState> nextStateTask;
        if (chunks.TryDequeue(out nextStateTask))
            lastState = await nextStateTask.WithCancellation(cancellationToken).ConfigureAwait(false);
        else
            lastState = await NextChunk(lastState, cancellationToken).ConfigureAwait(false);
        complete = enumerable.IsComplete(lastState);
        foreach (var result in enumerable.GetResults(lastState))
            results.Enqueue(result);
        if (!complete && enumerable.ReadAhead == ReadAhead.Some)
            chunks.Enqueue(NextChunk(lastState)); // This is a read ahead, so it shouldn't be tied to our token
        return await MoveNext(cancellationToken).ConfigureAwait(false);
    }

    public TResult Current { get { return current; } }

    // Dispose() implementation omitted
}
I make no claim this code is perfect. Sorry it is so long; I wasn't sure how to simplify it. The important part is the NextChunk method and the call to Task.Yield(). This functionality is used through a static construction method:
internal static class AsyncEnumerableEx
{
    public static IAsyncEnumerable<TResult> GenerateChunked<TState, TResult>(
        TState initialState,
        Func<TState, CancellationToken, Task<TState>> getMore,
        Func<TState, IEnumerable<TResult>> getResults,
        Func<TState, bool> isComplete,
        ReadAhead readAhead = ReadAhead.None)
    { ... }
}
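For illustration, a call to this factory might look like the following (a hypothetical usage sketch, not from the original post; QueryState, Row and FetchPageAsync stand in for the real DynamoDB-specific types):

var rows = AsyncEnumerableEx.GenerateChunked<QueryState, Row>(
    initialState: new QueryState(),
    // fetch the next chunk of rows, carrying paging state forward
    getMore: (state, token) => FetchPageAsync(state, token),
    // extract the rows contained in a chunk
    getResults: state => state.Rows,
    // no more pages once the paging key is null
    isComplete: state => state.LastEvaluatedKey == null,
    readAhead: ReadAhead.Some);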
The exact equivalent of Task.Yield().ConfigureAwait(false) (which doesn't exist, since ConfigureAwait is a method on Task and Task.Yield returns a custom awaitable) is simply using Task.Factory.StartNew with CancellationToken.None, TaskCreationOptions.PreferFairness and TaskScheduler.Current. In most cases, however, Task.Run (which uses the default TaskScheduler) is close enough.
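Applied to the NextChunk method from the question, that would look roughly like this (a minimal sketch under the assumption that the rest of the method body stays as posted; this is not the answer's exact code):

private Task<TState> NextChunk(TState state, CancellationToken? cancellationToken = null)
{
    // Task.Run queues the work to the thread pool, so no SynchronizationContext
    // is captured and there is nothing for the removed Task.Yield() to deadlock on.
    return Task.Run(async () =>
    {
        var nextState = await enumerable.GetMore(state, cancellationToken ?? cts.Token).ConfigureAwait(false);
        if (enumerable.ReadAhead == ReadAhead.All && !enumerable.IsComplete(nextState))
            chunks.Enqueue(NextChunk(nextState)); // read ahead, so not tied to our token
        return nextState;
    });
}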
You can verify that by looking at the source for YieldAwaiter and seeing that it uses ThreadPool.QueueUserWorkItem/ThreadPool.UnsafeQueueUserWorkItem when TaskScheduler.Current is the default one (i.e. the thread pool) and Task.Factory.StartNew when it isn't.
You can, however, create your own awaitable (as I did) that mimics YieldAwaitable but disregards the SynchronizationContext:
async Task Run(int input)
{
    await new NoContextYieldAwaitable();
    // executed on a ThreadPool thread
}

public struct NoContextYieldAwaitable
{
    public NoContextYieldAwaiter GetAwaiter() { return new NoContextYieldAwaiter(); }

    public struct NoContextYieldAwaiter : INotifyCompletion
    {
        public bool IsCompleted { get { return false; } }

        public void OnCompleted(Action continuation)
        {
            var scheduler = TaskScheduler.Current;
            if (scheduler == TaskScheduler.Default)
            {
                ThreadPool.QueueUserWorkItem(RunAction, continuation);
            }
            else
            {
                Task.Factory.StartNew(continuation, CancellationToken.None, TaskCreationOptions.PreferFairness, scheduler);
            }
        }

        public void GetResult() { }

        private static void RunAction(object state) { ((Action)state)(); }
    }
}
Note: I don't recommend actually using NoContextYieldAwaitable; it's just an answer to your question. You should be using Task.Run (or Task.Factory.StartNew with a specific TaskScheduler).