I understand that with threadless async there are more threads available to service inputs (e.g. an HTTP request), but I don't understand how that doesn't potentially cause thread starvation when the async operations complete and a thread is needed to run their continuation.
Let's say we only have 3 threads
Thread 1 |
Thread 2 |
Thread 3 |
and they get blocked on long-running operations (e.g. making a database query against a separate DB server)
Thread 1 | --- | Start servicing request 1 | Long-running operation .................. |
Thread 2 | ------------ | Start servicing request 2 | Long-running operation ......... |
Thread 3 | ------------------- | Start servicing request 3 | Long-running operation ...|
|
request 1
|
request 2
|
request 3
|
request 4 - BOOM!!!!
With async-await you can make this look like:
Thread 1 | --- | Start servicing request 1 | --- | Start servicing request 4 | ----- |
Thread 2 | ------------ | Start servicing request 2 | ------------------------------ |
Thread 3 | ------------------- | Start servicing request 3 | ----------------------- |
|
request 1
|
request 2
|
request 3
|
request 4 - OK
However, this seems to me like it could result in a surplus of async operations that are "in-flight", and if too many finish at the same time then there are no threads available to run their continuations.
Thread 1 | --- | Start servicing request 1 | --- | Start servicing request 4 | ----- |
Thread 2 | ------------ | Start servicing request 2 | ------------------------------ |
Thread 3 | ------------------- | Start servicing request 3 | ----------------------- |
|
request 1
|
request 2
|
request 3
|
request 4 - OK
| longer-running operation 1 completes - BOOM!!!!
The async and await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active.
The fact that await frees the thread up to do other things means that it can remain responsive to additional user actions and input. But, even if there is no graphical user interface, we can see the advantage of freeing up a thread.
The async keyword turns a method into an async method, which allows you to use the await keyword in its body. When the await keyword is applied, it suspends the calling method and yields control back to its caller until the awaited task is complete. await can only be used inside an async method.
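A minimal sketch of that suspension behavior (the method name and delay value are illustrative, not from the original):

```csharp
using System;
using System.Threading.Tasks;

class AwaitDemo
{
    // An async method: control returns to the caller at the first await
    // whose task is not already complete.
    static async Task<int> GetAnswerAsync()
    {
        await Task.Delay(10); // suspends here; the thread is freed for other work
        return 42;            // the continuation runs after the delay completes
    }

    static async Task Main()
    {
        Task<int> pending = GetAnswerAsync(); // runs until the await, then returns
        int answer = await pending;           // resumes when the task completes
        Console.WriteLine(answer);            // prints 42
    }
}
```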
Suppose you have a web application which handles a request with a very common flow: pre-processing, then some IO, then post-processing.
IO in this case can be a database query, a socket read/write, a file read/write and so on.
For an example of IO let's take file reading, with some arbitrary but realistic timings: pre-processing takes 1 ms, the file read takes 300 ms, and post-processing takes 1 ms.
Now suppose 100 requests come in at an interval of 1 ms. How many threads will you need to handle those requests without delay with synchronous processing like this?
public IActionResult GetSomeFile(RequestParameters p) {
    string filePath = Preprocess(p);
    var data = System.IO.File.ReadAllBytes(filePath);
    return PostProcess(data);
}
Well, 100 threads, obviously. Since the file read takes 300 ms in our example, by the time the 100th request comes in, the previous 99 threads are still blocked on their file reads.
Now let's "use async await":
public async Task<IActionResult> GetSomeFileAsync(RequestParameters p) {
    string filePath = Preprocess(p);
    byte[] data;
    using (var fs = System.IO.File.OpenRead(filePath)) {
        data = new byte[fs.Length];
        await fs.ReadAsync(data, 0, data.Length);
    }
    return PostProcess(data);
}
How many threads are needed now to handle 100 requests without delay? Still 100. That's because a file can be opened in "synchronous" or "asynchronous" mode, and by default it opens in "synchronous" mode. That means even though you are using ReadAsync, the underlying IO is not asynchronous, and some thread from the thread pool is blocked waiting for the result. Did we achieve anything useful by doing that? In the context of a web application - not at all.
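You can check which mode a handle was opened in yourself: FileStream.IsAsync reports whether the stream was opened for asynchronous (overlapped) IO. A small sketch (the temp file is just for the demo):

```csharp
using System;
using System.IO;

class IsAsyncDemo
{
    static void Main()
    {
        string path = Path.GetTempFileName(); // throwaway file for the demo
        try
        {
            // File.OpenRead opens the handle in synchronous mode...
            using (var syncFs = File.OpenRead(path))
                Console.WriteLine(syncFs.IsAsync);   // False

            // ...while FileOptions.Asynchronous requests overlapped IO.
            using (var asyncFs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                                FileShare.Read, 4096, FileOptions.Asynchronous))
                Console.WriteLine(asyncFs.IsAsync);  // True
        }
        finally
        {
            File.Delete(path);
        }
    }
}
```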
Now let's open file in "asynchronous" mode:
public async Task<IActionResult> GetSomeFileReallyAsync(RequestParameters p) {
    string filePath = Preprocess(p);
    byte[] data;
    using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, FileOptions.Asynchronous)) {
        data = new byte[fs.Length];
        await fs.ReadAsync(data, 0, data.Length);
    }
    return PostProcess(data);
}
How many threads do we need now? Now 1 thread is enough, in theory. When you open a file in "asynchronous" mode, reads and writes will utilize (on Windows) Windows overlapped IO.
In simplified terms it works like this: there is a queue-like object (an IO completion port) where the OS can post notifications about the completion of certain IO operations. The .NET thread pool registers one such IO completion port. There is only one thread pool per .NET application, so there is one IO completion port.
When a file is opened in "asynchronous" mode, its file handle is bound to this IO completion port. Now when you call ReadAsync, no thread is blocked waiting for that specific read while the actual read is performed. When the OS notifies the .NET completion port that IO for this file handle has completed, the .NET thread pool executes the continuation on a thread pool thread.
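One way to observe the continuation landing on a pool thread (thread IDs vary from run to run, so treat the IDs as illustrative; Task.Delay stands in for real asynchronous IO):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ContinuationDemo
{
    static async Task Main()
    {
        Console.WriteLine($"before await: thread {Thread.CurrentThread.ManagedThreadId}");
        await Task.Delay(50); // no thread is blocked during the delay
        // With no SynchronizationContext (as in a console app), the continuation
        // is dispatched to whatever thread-pool thread is available.
        Console.WriteLine($"after await: thread {Thread.CurrentThread.ManagedThreadId}, " +
                          $"pool thread: {Thread.CurrentThread.IsThreadPoolThread}");
    }
}
```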
Now let's see how processing of 100 requests with 1ms interval can go in our scenario:
Request 1 comes in; we grab a thread from the pool to execute the 1 ms pre-processing step. Then the thread starts an asynchronous read. It doesn't need to block waiting for completion, so it returns to the pool.
Request 2 comes in. We already have a thread in the pool which has just completed pre-processing of request 1. We don't need an additional thread; we can reuse that one.
The same is true for all 100 requests.
After pre-processing all 100 requests (roughly 100 ms), there are still about 200 ms until the first IO completion arrives, during which our single thread can do even more useful work.
IO completion events then start to arrive, but our post-processing step is also very short (1 ms), so once again a single thread can handle them all.
This is an idealized scenario of course, but it shows how not "async await" by itself but specifically asynchronous IO can help you "save threads".
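The idealized scenario above can be simulated: the sketch below runs 100 concurrent "requests" (1 ms pre-processing, a 300 ms awaited delay standing in for asynchronous IO, then post-processing) and counts how many distinct threads were actually involved. The request shape and timings are the ones assumed in this answer, not a real web workload.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class FewThreadsDemo
{
    // Simulates one request: record the pre-processing thread, "do IO"
    // without blocking any thread, then record the post-processing thread.
    static async Task HandleRequest(ConcurrentDictionary<int, bool> threadsSeen)
    {
        threadsSeen[Thread.CurrentThread.ManagedThreadId] = true; // pre-processing
        await Task.Delay(300);                                    // asynchronous "IO"
        threadsSeen[Thread.CurrentThread.ManagedThreadId] = true; // post-processing
    }

    static async Task Main()
    {
        var threadsSeen = new ConcurrentDictionary<int, bool>();
        var requests = Enumerable.Range(0, 100).Select(_ => HandleRequest(threadsSeen));
        await Task.WhenAll(requests);
        // 100 concurrent "requests" complete, yet the number of distinct
        // threads involved stays far below 100 (typically near the core count).
        Console.WriteLine(threadsSeen.Count < 100);
    }
}
```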
What if our post-processing step is not short, because we decided to do heavy CPU-bound work in it? Well, that will cause thread pool starvation. The thread pool will create new threads without delay until it reaches a configurable "low watermark" (which you can obtain via ThreadPool.GetMinThreads() and change via ThreadPool.SetMinThreads()). After that number of threads is reached, the thread pool will wait for one of the busy threads to become free. It will not wait forever, of course; usually it waits around 0.5-1 seconds, and if no thread becomes free it creates a new one. Still, that delay can slow your web application quite a bit under heavy load. So don't violate the thread pool's assumptions: don't run long CPU-bound work on thread pool threads.
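A small sketch of reading and raising that "low watermark" (the doubling is arbitrary, just to show the call; real values should be tuned to the workload):

```csharp
using System;
using System.Threading;

class MinThreadsDemo
{
    static void Main()
    {
        // The "low watermark": up to this many threads are created without delay.
        ThreadPool.GetMinThreads(out int workerMin, out int ioMin);
        Console.WriteLine($"min worker: {workerMin}, min IO: {ioMin}");

        // Raising it can help when bursts of work arrive faster than the pool's
        // throttled growth, at the cost of more idle threads.
        ThreadPool.SetMinThreads(workerMin * 2, ioMin);
        ThreadPool.GetMinThreads(out int newWorkerMin, out _);
        Console.WriteLine(newWorkerMin == workerMin * 2); // True
    }
}
```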