
How expensive are kernel context switches compared to userspace context switches?

According to C10k and this paper, the throughput of 1-thread-per-connection servers degrades as more clients connect and more threads are created. According to those two sources, this is because the more threads exist, the more time is spent on context switching compared to the actual work done by those threads. Evented servers don't seem to suffer as much from performance degradation at high connection counts.

However, evented servers also do context switches between clients, they just do it in userspace.
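(To illustrate what I mean by switching between clients in userspace: an evented server is typically a single thread that multiplexes all connections with something like epoll, so moving from one client to the next is just picking another ready file descriptor out of an array. A rough sketch of such a loop follows; the port 8080 and buffer size are arbitrary and error handling is omitted.)

    /* Minimal single-threaded epoll echo server: "switching" between
     * clients is just moving to the next ready fd; no kernel thread
     * context switch happens per client. Port 8080 is arbitrary. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(listener, (struct sockaddr *)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

        struct epoll_event events[64];
        char buf[4096];
        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listener) {               /* new client connecting */
                    int client = accept(listener, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                } else {                            /* existing client is readable */
                    ssize_t len = read(fd, buf, sizeof(buf));
                    if (len <= 0) { close(fd); continue; }
                    write(fd, buf, len);            /* echo the data back */
                }
            }
        }
    }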

  • Why are these userspace context switches faster than kernel thread context switches?
  • What exactly does a kernel context switch do that's so much more expensive?
  • How expensive is a kernel context switch exactly? How much time does it take?
  • Does kernel context switching time depend on the number of threads?

I'm mostly interested in how the Linux kernel handles context switching but information about other OSes is welcome too.

Hongli asked Aug 07 '11 12:08


People also ask

How expensive is a context switch?

In general, the indirect cost of a context switch ranges from several microseconds to more than one thousand microseconds for our workload. When the overall data size is larger than the cache size, the overhead of refilling the L2 cache has a substantial impact on the cost of a context switch.

Why is thread context switch expensive?

Context switching is very expensive, not because of the CPU operation itself but because of cache invalidation. If you have an intensive task running, it fills the CPU caches for both instructions and data, and the memory prefetcher, TLB, and RAM access patterns all end up optimized for that task's areas of memory.

Why are frequent context switches expensive in terms of system performance?

Context switching itself has a cost in performance, due to running the task scheduler, TLB flushes, and indirectly due to sharing the CPU cache between multiple tasks.

Why is context switching between threads cheaper?

Thread switching is a type of context switching from one thread to another thread within the same process. It is very efficient and much cheaper because it only involves switching out per-thread state such as the program counter, registers, and stack pointer, while the address space and other process resources stay the same.


1 Answer

  • Why are these userspace context switches faster than kernel thread context switches?

Because the CPU does not need to switch to kernel mode and back to user mode.
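To make this concrete, here is a minimal sketch of a purely userspace switch using the POSIX ucontext API: swapcontext() saves and restores registers and the stack pointer from user code, with no scheduler involved. (glibc's implementation does additionally save the signal mask with a syscall, which is why coroutine libraries usually hand-roll the switch in assembly. The 64 KB stack size below is an arbitrary choice for the example.)

    /* Sketch: two "fibers" ping-ponging via swapcontext(). The switch is
     * a register/stack save-restore done in user code; no kernel thread
     * is switched and no scheduler runs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, fiber_ctx;

    static void fiber_func(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("fiber: step %d\n", i);
            swapcontext(&fiber_ctx, &main_ctx);   /* yield back to main */
        }
    }

    int main(void)
    {
        char stack[64 * 1024];                     /* arbitrary stack size */

        getcontext(&fiber_ctx);
        fiber_ctx.uc_stack.ss_sp = stack;
        fiber_ctx.uc_stack.ss_size = sizeof(stack);
        fiber_ctx.uc_link = &main_ctx;             /* where to go if the fiber returns */
        makecontext(&fiber_ctx, fiber_func, 0);

        for (int i = 0; i < 3; i++) {
            printf("main:  step %d\n", i);
            swapcontext(&main_ctx, &fiber_ctx);    /* switch into the fiber */
        }
        return 0;
    }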

  • What exactly does a kernel context switch do that's so much more expensive?

Mostly the switch to kernel mode. IIRC, the page tables are the same in kernel mode and user mode in Linux, so at least there is no TLB invalidation penalty.

  • How expensive is a kernel context switch exactly? How much time does it take?

That needs to be measured and can vary from machine to machine. My guess is that a typical desktop/server machine these days can do a few hundred thousand context switches per second, probably a few million.
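If you want a rough number for your own machine, a common trick is to make two processes ping-pong a single byte over a pair of pipes and divide the elapsed time by the number of round trips. Each round trip forces roughly two context switches plus the pipe read/write overhead, so treat the result as an upper bound rather than a pure switch cost; pinning both processes to one CPU (e.g. with taskset) gives a cleaner number. A minimal sketch, with an arbitrary round count:

    /* Rough context-switch benchmark: parent and child ping-pong one byte
     * over two pipes. Each round trip forces ~2 context switches plus
     * pipe syscall overhead. */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int p2c[2], c2p[2];           /* parent->child and child->parent pipes */
        char byte = 'x';

        if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {               /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                read(p2c[0], &byte, 1);
                write(c2p[1], &byte, 1);
            }
            _exit(0);
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < ROUNDS; i++) {
            write(p2c[1], &byte, 1);
            read(c2p[0], &byte, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        waitpid(pid, NULL, 0);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
        printf("~%.0f ns per round trip (~2 switches + pipe overhead)\n", ns / ROUNDS);
        return 0;
    }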

  • Does kernel context switching time depend on the number of threads?

That depends on how the kernel scheduler handles it. AFAIK, the Linux scheduler stays pretty efficient even with large thread counts, but more threads mean more memory usage and therefore more cache pressure, and thus likely lower performance. I would also expect some overhead in handling thousands of sockets.
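If you want to watch this on a live system, Linux exposes a system-wide context-switch counter as the ctxt line in /proc/stat (vmstat reports the same number in its cs column). A tiny sampler like the sketch below lets you see how the switch rate changes as you add connections or threads to your server:

    /* Sketch: sample the system-wide context-switch counter ("ctxt" in
     * /proc/stat) once per second to observe the switch rate. */
    #include <stdio.h>
    #include <unistd.h>

    static unsigned long long read_ctxt(void)
    {
        char line[256];
        unsigned long long ctxt = 0;
        FILE *f = fopen("/proc/stat", "r");
        if (!f) return 0;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "ctxt %llu", &ctxt) == 1)   /* total switches since boot */
                break;
        fclose(f);
        return ctxt;
    }

    int main(void)
    {
        unsigned long long prev = read_ctxt();
        for (;;) {
            sleep(1);
            unsigned long long cur = read_ctxt();
            printf("%llu context switches/sec\n", cur - prev);
            prev = cur;
        }
    }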

Ringding answered Sep 27 '22 17:09