I/O Completion Ports vs. RegisterWaitForSingleObject?

What's the difference between using I/O completion ports and just using RegisterWaitForSingleObject to have a thread-pool thread wait for I/O to complete?

Is one of them faster, and if so, why?

asked Feb 16 '23 by user541686


1 Answer

IOCPs are generally the fastest-performing I/O turn-around mechanism you will find, for one reason above all else: blocking detection.

A simple example of this is a server that is responsible for serving up files from a disk. An IOCP is generally made up of three primary things (sketched in code after the list):

  1. The pool of N threads for servicing the IOCP requests.
  2. A limit of M threads (M is always < N) that tells the IOCP how many concurrent, non-blocked threads to allow.
  3. A completion-status loop that all threads run on.
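
A minimal sketch of that setup might look like this, assuming a plain Win32 C++ server; the pool sizes and the WorkerThread routine (sketched a bit further down) are purely illustrative:

```cpp
// Sketch: create an IOCP whose concurrency limit is M, then spawn a
// larger pool of N threads that all run the completion-status loop.
#include <windows.h>

DWORD WINAPI WorkerThread(LPVOID param);   // the completion loop; sketched below

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    const DWORD M = si.dwNumberOfProcessors;  // concurrent, non-blocked threads
    DWORD N = M * 4;                          // real pool; size it by expected blocking time
    if (N > 128) N = 128;

    // The last argument is the port's concurrency limit (M).
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, M);
    if (iocp == NULL)
        return 1;

    HANDLE threads[128] = {};
    for (DWORD i = 0; i < N; ++i)
        threads[i] = CreateThread(NULL, 0, WorkerThread, iocp, 0, NULL);

    // ... associate file/socket handles with `iocp` and queue work here ...

    // Shutdown: post one "quit" packet (a NULL OVERLAPPED) per worker,
    // which the completion loop treats as a signal to exit.
    for (DWORD i = 0; i < N; ++i)
        PostQueuedCompletionStatus(iocp, 0, 0, NULL);

    for (DWORD i = 0; i < N; ++i)
    {
        WaitForSingleObject(threads[i], INFINITE);
        CloseHandle(threads[i]);
    }
    CloseHandle(iocp);
    return 0;
}
```

The only IOCP-specific knob here is that last argument to CreateIoCompletionPort(): it is what caps the number of concurrently runnable, non-blocked threads at M while the real pool holds N.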

The difference between N and M in this is very important. The general philosophy is to configure M to be the number of cores on the machine, and N to be larger. How much larger depends on the amount of time your worker threads spend in a blocked state. If you're reading disk files, your threads will be bound to the speed of the disk I/O channel. When you make that call to ReadFile(), you've just introduced a blocking call. If M == N, then as soon as you hit all threads reading disk files, you're utterly stalled, with every thread waiting on the disk I/O channel.
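
Roughly what each worker in the N pool might look like; the FileRequest structure and the synchronous ReadFile() call are illustrative stand-ins for whatever your server actually does per request:

```cpp
// Sketch of the completion-status loop each of the N workers runs. A
// FileRequest (illustrative) is assumed to be allocated with `new` and
// queued to the port with PostQueuedCompletionStatus(iocp, 0, 0, &req->ov).
#include <windows.h>

struct FileRequest
{
    OVERLAPPED ov;                 // first member, so the OVERLAPPED* maps back to the request
    HANDLE     file;               // synchronous handle to the file being served
    char       buffer[64 * 1024];
};

DWORD WINAPI WorkerThread(LPVOID param)
{
    HANDLE iocp = (HANDLE)param;

    for (;;)
    {
        DWORD       bytes = 0;
        ULONG_PTR   key   = 0;
        OVERLAPPED *ov    = NULL;

        // Dequeue the next packet. If work is already queued this returns
        // immediately with no context switch; otherwise the thread parks
        // here until the port decides it is needed again.
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
        {
            if (ov == NULL) break;   // port closed or fatal error: shut down
            continue;                // an individual operation failed: skip it
        }
        if (ov == NULL)
            break;                   // "quit" packet posted at shutdown

        FileRequest *req = (FileRequest *)ov;

        DWORD read = 0;
        // Blocking read on a synchronous handle: this thread now counts as
        // stalled, so the port may release one of the spare (N - M) threads
        // to keep the CPUs busy while the disk I/O completes.
        if (ReadFile(req->file, req->buffer, sizeof(req->buffer), &read, NULL))
        {
            // ... send req->buffer to the client, queue the next chunk, etc. ...
        }

        CloseHandle(req->file);
        delete req;
    }
    return 0;
}
```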

But what if there were a way for some fancy scheduler to "know" that this thread is (a) participating in an IOCP thread pool, and (b) stalled because it issued an API call that will be time-consuming? What if, when that happens, that fancy scheduler could temporarily "move" that thread into a special "running-but-stalled" group, and then "release" an extra thread that has volunteered to work while there are threads stalled?

That is exactly what an IOCP brings. When N is greater than M, the IOCP will put the thread that just issued the stall into a special running-but-stalled state, and then temporarily "borrow" an additional thread from your pool of N. It will continue to do this until the N pool is exhausted, or until threads that were stalled begin returning from their blocking requests.

So in that light, an IOCP configured to allow, say, 8 concurrently running threads on an 8-core machine could actually have a few hundred threads in the real pool. Only 8 will ever be "allowed" to run concurrently in a non-blocked state, though you may pop over that temporarily when blocked threads return from their blocking calls while borrowed threads are already servicing additional requests.

Finally, though less important for your case, it still matters: an IOCP thread will NOT block, nor context switch, if there is pending work on the queue when it finishes its current work and issues its next GetQueuedCompletionStatus() call. If there is work waiting, it will pick it up and continue executing with no mandated preemption. Of course the OS scheduler may preempt anyway, but only as part of general scheduling, not because of the specific call to GetQueuedCompletionStatus(). The lone exception to this is if there are already M or more threads running and non-blocked. In that case, GetQueuedCompletionStatus() will block the calling thread until it is needed again for slack work when enough threads once again become blocked.

The description you gave indicates you will be heavily disk-I/O-bound. For performance-critical I/O server architectures, it is near impossible to beat the benefits of an IOCP, especially the OS-level block detection that allows the scheduler to know it can temporarily release extra threads from your master pool to keep things pumping while other threads are stalled.

You simply cannot replicate that specific feature of IOCPs using Windows thread pools. If all of your threads were number crunchers with little or no I/O, I would say the thread pool would be a better fit, but the fact that you are specifically doing disk I/O tells me you should be using an IOCP instead.
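
For contrast, here is roughly what the RegisterWaitForSingleObject route from your question could look like: you issue an overlapped read yourself, then register a thread-pool wait on the event so a callback runs when it signals. It works, but you pay for one wait registration and one callback dispatch per operation, and there is no equivalent of the block detection described above. (PendingRead, StartRead, and OnReadComplete are illustrative names, not a library API.)

```cpp
// Sketch of the RegisterWaitForSingleObject approach: one overlapped read,
// one event, one thread-pool wait registration per operation.
#include <windows.h>

struct PendingRead                 // illustrative per-operation context
{
    OVERLAPPED ov;
    HANDLE     file;
    HANDLE     wait;               // handle returned by RegisterWaitForSingleObject
    char       buffer[64 * 1024];
};

// Runs on a thread-pool thread when ov.hEvent is signaled.
VOID CALLBACK OnReadComplete(PVOID context, BOOLEAN /*timedOut*/)
{
    PendingRead *req = (PendingRead *)context;

    DWORD bytes = 0;
    if (GetOverlappedResult(req->file, &req->ov, &bytes, FALSE))
    {
        // ... process req->buffer ...
    }

    // Non-blocking unregister; it may report ERROR_IO_PENDING because we are
    // still inside the callback, which is fine for a WT_EXECUTEONLYONCE wait.
    UnregisterWait(req->wait);
    CloseHandle(req->ov.hEvent);
    CloseHandle(req->file);
    delete req;
}

bool StartRead(const char *path)
{
    PendingRead *req = new PendingRead();   // value-initialized: OVERLAPPED zeroed
    req->file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                            OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (req->file == INVALID_HANDLE_VALUE) { delete req; return false; }

    req->ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    // Issue the asynchronous read; completion will signal ov.hEvent.
    if (!ReadFile(req->file, req->buffer, sizeof(req->buffer), NULL, &req->ov) &&
        GetLastError() != ERROR_IO_PENDING)
    {
        CloseHandle(req->ov.hEvent);
        CloseHandle(req->file);
        delete req;
        return false;
    }

    // Ask the thread pool to wait on the event and call us back once.
    return RegisterWaitForSingleObject(&req->wait, req->ov.hEvent,
                                       OnReadComplete, req,
                                       INFINITE, WT_EXECUTEONLYONCE) != FALSE;
}
```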

answered Feb 20 '23 by WhozCraig