 

What is the difference between "Dispatch Latency" and "Context Switch" in operating systems?

I am currently studying operating systems from Silberschatz's book and have come across the "Dispatch Latency" concept. The book defines it as follows:

The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

Isn't this the same definition of "Context Switch"? Is there any difference between the two terms or are they interchangeable?

asked Oct 19 '18 by Islam Hassan


People also ask

What is the difference between dispatcher and context switch?

The term dispatching is associated with scheduling and means roughly selecting the next task to run. So in a typical task switch, for example due to a timer interrupt, the context switcher first saves the context of the interrupted task, establishes the context to run system code and then calls the dispatcher.

What is dispatch latency in operating system?

The term dispatch latency describes the amount of time it takes for a system to respond to a request for a process to begin operation. With a scheduler written specifically to honor application priorities, real-time applications can be developed with a bounded dispatch latency.

What is context switch latency?

The time to switch between two separate processes is called the process switching latency. The time to switch between two threads of the same process is called the thread switching latency. The time from when a hardware interrupt is generated to when the interrupt is serviced is called the interrupt latency.

What is context switching in operating system?

An operating system uses this technique to switch the CPU between processes. It consists of saving the context (state) of the old process (suspending it) and loading the saved context of the new process (resuming it). It occurs whenever the CPU switches from one process to another.


2 Answers

Let's try a "somewhat realistic" scenario and assume that a task previously used read() to fetch data from a pipe but there was no data at the time so the task was blocked; then something wrote data to the pipe causing the task to be unblocked again. In this scenario:

  • the scheduler does a task switch from "previous task running kernel code" to "task that was unblocked running kernel code". This might take 40 nanoseconds.
  • the kernel (now running in the context of the unblocked task) copies data into the buffer that was provided by the original read() call, and arranges the values that read() is supposed to return (e.g. the number of bytes read). This might take another 50 nanoseconds.
  • the kernel decides it has nothing better to do so it returns to user-space, taking another 10 nanoseconds.

Here, the context switch time would be 40 nanoseconds, but the dispatch latency (as defined by the book's author) would be 100 nanoseconds.

answered Dec 10 '22 by Brendan


"Context Switch" names an action the kernel performs. "Dispatch Latency" names a duration: the time that action (plus the surrounding work) takes.

answered Dec 10 '22 by bolov