
Context Switches on Sleeping/Waiting Threads

I'm trying to understand how operating systems handle context switching in different threading models, to better understand why NIO performs better under large spikes in the number of requests. Apart from the fact that there may be a limit to the number of threads, I'm curious how the blocking operations performed by that large number of requests affect resource utilization.

In a one-request-per-thread model, say a servlet 2.5 based web application, if 499 threads are waiting for database IO and only one thread needs work, does the OS context switch between all of those 500 threads trying to find the one that needs work? To perform a context switch, the operating system has to store the current thread's state and restore the next thread's state; if that next thread is still waiting on IO, the OS would find it has nothing to do and would have to keep context switching until it reaches the thread that does need work. Also, what does this look like in terms of server utilization? Is CPU usage low because it's mostly just bound by the IO cost of swapping contexts in and out instead of actually computing anything?
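To make the scenario concrete, here's a rough sketch of the kind of setup I mean (the class name and the simulated database call are just placeholders, not my actual application code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestServer {
    // One thread per request, capped at 500 -- most of these threads will
    // spend their time blocked inside the (simulated) database call.
    private static final ExecutorService pool = Executors.newFixedThreadPool(500);

    public static void handleRequest(Runnable request) {
        pool.submit(request);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 500; i++) {
            handleRequest(() -> {
                try {
                    // Stand-in for a blocking JDBC query: the thread is parked
                    // by the OS until the "database" responds.
                    Thread.sleep(30_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // The pool is deliberately never shut down here; this only sketches the threading model.
    }
}
```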

Thanks in advance for any help. If you can point me in the direction of books, textbooks, etc., I would really appreciate that as well.

asked Feb 08 '15 by JasonG


1 Answer

If 499 threads are waiting for database IO and only one thread needs work, does the OS context switch between all of those 500 threads trying to find the one that needs work?

Not if the OS's scheduler has a sane design; iterating over all of the system's threads all of the time would be terribly inefficient.

Instead, most scheduler implementations keep a list of sleeping/blocked threads and a separate list of "ready-to-run" threads. When an event occurs that is supposed to wake up a thread (e.g. incoming data becomes available on a socket or file handle, or a mutex that the thread was blocked on is released), the OS moves that thread from the sleeping/blocked-threads list to the ready-threads list. Then, when it is time to perform a context switch, the OS chooses a thread from the ready-threads list, loads in that thread's context, and starts running it. In any modern/popular OS, the size of the sleeping/blocked-threads list has no impact at all on the time it takes the scheduler to select a thread from the ready-threads list to run. (The size of the ready-threads list might have an impact under some OSes, but many schedulers are designed so that even a system with many ready threads doesn't become less efficient at scheduling.)
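To illustrate the idea, here's a toy model of that two-list bookkeeping (this is just a sketch of the concept, not how any particular kernel is actually implemented):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy model of the two-list scheduler described above (not a real OS scheduler). */
public class ToyScheduler {
    // Threads that can run right now.
    private final Deque<String> readyList = new ArrayDeque<>();
    // Threads parked until a specific event (socket readable, mutex released, ...) fires.
    private final Map<String, List<String>> blockedOn = new HashMap<>();

    /** A thread blocks on an event: the scheduler stops considering it until the event fires. */
    void block(String thread, String event) {
        blockedOn.computeIfAbsent(event, e -> new ArrayList<>()).add(thread);
    }

    /** The event fires: only the threads waiting on *that* event move to the ready list. */
    void wake(String event) {
        List<String> woken = blockedOn.remove(event);
        if (woken != null) {
            readyList.addAll(woken);
        }
    }

    /** Context switch: pick the next ready thread in O(1); blocked threads are never examined. */
    String pickNext() {
        return readyList.pollFirst();  // null means "nothing to run, go idle"
    }
}
```

The point is that waking a thread and picking the next thread to run each touch only the relevant list; nothing ever iterates over the 499 sleeping threads.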

Is CPU usage low because it's mostly bound by the IO cost of swapping contexts in and out instead of actually computing anything?

Assuming you haven't run out of RAM, there is no I/O involved in switching thread contexts; context-switching involves the CPU and RAM only. If the CPU usage is low, the most likely reason is that your threads' algorithms themselves are I/O bound (e.g. most everything is waiting on your network card or hard drive to read or write data, most of the time). If your threads don't actually do any I/O, and you're still I/O bound, that might be a sign that your computer has used up all of its available RAM and is thrashing -- not a good state to be in.
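If you want to check this from the Java side, the standard java.lang.management API can tell you how many of your threads are runnable at any instant versus parked waiting; a quick illustrative snapshot might look like this:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

public class ThreadStateSnapshot {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<Thread.State, Integer> counts = new EnumMap<>(Thread.State.class);

        // Count how many JVM threads are RUNNABLE vs. parked/waiting right now.
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info != null) {  // a thread may have died since the IDs were collected
                counts.merge(info.getThreadState(), 1, Integer::sum);
            }
        }
        // In the 500-thread scenario you'd expect ~499 WAITING/TIMED_WAITING threads
        // (costing essentially no CPU) and only a handful RUNNABLE.
        System.out.println(counts);
    }
}
```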

answered Oct 24 '22 by Jeremy Friesner