I'm just trying to clear up some concepts here. If anyone is willing to share their expertise on this matter, it would be greatly appreciated.
The following is my understanding of how IIS works in relation to threads, please correct me if I'm wrong.
As I understand it, for IIS 6.0 (I'll leave IIS 7.0 aside for now): the web browser makes a request, which is picked up by the HTTP.sys kernel driver. HTTP.sys hands it over to IIS 6.0's thread pool (an I/O thread?) and then frees itself up.
The IIS 6.0 thread in turn hands the request over to ASP.NET, which returns HSE_STATUS_PENDING to IIS 6.0. That frees up the IIS 6.0 thread, and the request is forwarded to a CLR thread.
When ASP.NET picks up a free thread from the CLR thread pool, it executes the request. If no CLR threads are available, the request is queued in the application-level queue (which performs badly).
So based on the previous understanding, my questions are the following.
In synchronous mode, does that mean 1 request per 1 CLR thread?
*) If so, how many CONCURRENT requests can be served on one CPU? Or should I ask the reverse: how many CLR threads are allowed per CPU? Say 50 CLR threads are allowed; does that mean the server is limited to serving 50 requests at any given time? Confused.
If I set "requestQueueLimit" in the "processModel" configuration to 5000, what does that really mean? That 5000 requests can be queued up in the application queue? Isn't that really bad? Why would you ever set it so high, given that the application queue performs badly?
If you are programming an asynchronous page, where exactly in the above process does the benefit kick in?
I researched and see that by default, IIS 6.0's thread pool size is 256. Suppose 5000 concurrent requests come in: they are handled by 256 IIS 6.0 threads, and each of those 256 threads hands off to CLR threads, whose count I'm guessing is even lower by default. Isn't that in itself asynchronous? A bit confused. In addition, where and when does the bottleneck start to show up in synchronous mode, and in asynchronous mode? (Not sure if I'm making any sense; I'm just confused.)
What happens when the IIS thread pool (all 256 threads) is busy?
What happens when all CLR threads are busy? (I assume all requests are then queued up in the application-level queue.)
What happens when the application queue exceeds the requestQueueLimit?
Thanks a lot for reading, greatly appreciate your expertise on this matter.
You're pretty spot-on with the handoff process to the CLR, but here's where things get interesting:
If every step of the request is CPU-bound/otherwise synchronous, yes: that request will suck up that thread for its lifetime.
However, if any part of the request processing tasks out to anything asynchronous, or even anything I/O-related outside of purely managed code (a DB connection, a file read/write, etc.), it is possible, if not probable, that this will happen:
Request comes into CLR-land, picked up by thread A
Request calls out to filesystem
Under the hood, the transition to unmanaged code happens at some level, which results in an I/O completion port thread (different from a normal thread pool thread) being allocated to handle the callback.
Once that handoff occurs, Thread A returns to the thread pool, where it is able to service other requests.
Once the I/O task completes, execution is re-queued; let's say Thread A is busy, so Thread B picks up the request.
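The flow above is the classic Begin/End (APM) pattern. A minimal sketch using an asynchronous file read (the file name is a placeholder; this is an illustration, not production code):

```csharp
using System;
using System.IO;
using System.Threading;

class AsyncReadSketch
{
    static void Main()
    {
        // FileOptions.Asynchronous asks Windows for overlapped I/O, so the
        // read is serviced via an I/O completion port, not a worker thread.
        using (var fs = new FileStream("data.txt", FileMode.Open, FileAccess.Read,
                                       FileShare.Read, 4096, FileOptions.Asynchronous))
        {
            var buffer = new byte[4096];

            // "Thread A" issues the read and is immediately free again.
            fs.BeginRead(buffer, 0, buffer.Length, ar =>
            {
                // This callback runs on an I/O completion port thread
                // ("Thread B" in the description above).
                var inner = (FileStream)ar.AsyncState;
                int read = inner.EndRead(ar);
                Console.WriteLine("Read {0} bytes on thread {1}",
                    read, Thread.CurrentThread.ManagedThreadId);
            }, fs);

            Console.WriteLine("Issued read on thread {0}",
                Thread.CurrentThread.ManagedThreadId);
            Thread.Sleep(500); // crude wait so the demo doesn't exit early
        }
    }
}
```

Running this typically prints the two messages with different thread IDs, which is the handoff in action.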
This sort of "fun" behavior is also called "Thread Agility", and is one reason to avoid using ANYTHING that is Thread Static in an ASP.NET application if you can.
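A hypothetical illustration of why `[ThreadStatic]` bites you under thread agility: each thread has its own copy of the field, so a value set before the async hop is simply absent on whichever thread resumes the request.

```csharp
using System;
using System.Threading;

class AgilityPitfall
{
    // Each thread gets its OWN copy of this field.
    [ThreadStatic]
    static string CurrentUser;

    static void Main()
    {
        CurrentUser = "alice"; // set on "Thread A"

        // Simulate the request resuming on a different pool thread
        // after an asynchronous I/O hop.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // On "Thread B" the field was never set, so it is null here.
            Console.WriteLine(CurrentUser ?? "(null - value was lost)");
        });

        Thread.Sleep(200);
    }
}
```

This is why per-request state belongs in `HttpContext.Items` rather than in anything thread-bound.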
Now, to some of your questions:
The request queue limit is the number of requests that can be "in line" before requests start getting flat-out dropped. If you had, say, an exceptionally "bursty" application that receives a LOT of very short-lived requests, setting this high would prevent dropped requests, since they would bunch up in the queue but drain equally quickly.
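For reference, `requestQueueLimit` lives in the `<processModel>` element of machine.config for classic ASP.NET on IIS 6. A sketch (the values shown are illustrative, not recommendations):

```xml
<!-- machine.config: requests beyond requestQueueLimit get a
     503 "Server Too Busy" response instead of being queued -->
<system.web>
  <processModel
      autoConfig="false"
      requestQueueLimit="5000"
      maxWorkerThreads="100"
      maxIoThreads="100" />
</system.web>
```

Note that `maxWorkerThreads` and `maxIoThreads` are per-CPU values, and with `autoConfig="true"` (the default in .NET 2.0) ASP.NET tunes these for you.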
Asynchronous handlers allow you to create the same "call me when you're done" type of behavior that the above scenario has. For example, if you needed to make a web service call, calling it synchronously via, say, HttpWebRequest would by default block until completion, tying up that thread until it was done. Calling the same service asynchronously (or via an asynchronous handler, or any Begin/EndXXX pattern) gives you some control over who actually gets tied up: your calling thread can continue performing actions until the web service returns, which might actually be after the request has completed.
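A sketch of that Begin/End pattern with `HttpWebRequest` (the URL is a placeholder):

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

class AsyncWebCall
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

        // BeginGetResponse returns immediately; the calling thread is free
        // to do other work (or go back to the pool) while the call runs.
        request.BeginGetResponse(ar =>
        {
            var req = (HttpWebRequest)ar.AsyncState;
            using (var response = (HttpWebResponse)req.EndGetResponse(ar))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine("Got {0} chars on thread {1}",
                    reader.ReadToEnd().Length,
                    Thread.CurrentThread.ManagedThreadId);
            }
        }, request);

        Console.WriteLine("Request issued; calling thread keeps working...");
        Thread.Sleep(2000); // crude wait for the demo
    }
}
```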
One thing to note is there is but one ThreadPool - all non-IO threads are pulled from there, so if you move everything to asynchronous processing, you may just bite yourself by exhausting your threadpool doing background work, and not servicing requests.
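You can inspect that shared pool's limits at runtime; a quick sketch:

```csharp
using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        int worker, io;

        ThreadPool.GetMaxThreads(out worker, out io);
        Console.WriteLine("Max worker threads: {0}, max I/O threads: {1}", worker, io);

        ThreadPool.GetAvailableThreads(out worker, out io);
        Console.WriteLine("Available worker: {0}, available I/O: {1}", worker, io);
    }
}
```

Watching the "available" numbers under load is a simple way to spot the pool-exhaustion scenario described above.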