I often read or hear the argument that making a lot of system calls is inefficient because the application has to make a mode switch, i.e. it goes from user mode to kernel mode and, after the system call has executed, switches back and resumes execution in user mode.
My question is: what is the overhead of a mode switch? Does the CPU cache get invalidated, are TLB entries flushed, or what else happens that causes overhead?
Please note that I am asking about the overhead involved in a mode switch, not a context switch. I know that mode switches and context switches are two different things, and I am fully aware of the overhead associated with a context switch; what I fail to understand is what overhead a mode switch causes.
If possible, please provide some information about a particular *nix platform such as Linux, FreeBSD, or Solaris.
Context switching incurs an overhead cost because of TLB flushes, sharing the CPU cache between multiple tasks, running the task scheduler, and so on. Context switching between two threads of the same process is faster than between two different processes, because threads share the same virtual memory map.
A mode switch changes the privilege level of the running process between user and kernel mode. It occurs when a process needs access to a system resource, and it happens through the system call interface or via an interrupt. A system call allows a user-mode process to invoke a kernel function.
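For a concrete sense of what the system call interface looks like, here is a minimal sketch (Linux with glibc assumed) that invokes the same kernel function through the libc wrapper and through the generic syscall(2) entry point; SYS_getuid is the syscall number constant from <sys/syscall.h>:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>   /* SYS_getuid */

int main(void) {
    /* Both calls trap into the kernel (a mode switch), run the
     * handler in kernel mode, and return to user mode. */
    uid_t a = getuid();                    /* libc wrapper */
    uid_t b = (uid_t)syscall(SYS_getuid);  /* raw syscall entry point */
    printf("getuid() = %d, syscall(SYS_getuid) = %d\n", (int)a, (int)b);
    return 0;
}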
A user process undergoes a mode switch when it needs access to system resources. This is implemented through the system call interface or by interrupts such as page faults.
There should be no CPU cache or TLB flush on a simple mode switch.
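To see the interrupt path rather than the system call path, here is a minimal sketch (Linux assumed) in which the first write to a freshly mmap'd anonymous page triggers a page fault, i.e. a mode switch into the kernel's fault handler and back, with no explicit syscall at the faulting instruction:

#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Reserve one anonymous page; no physical frame is mapped yet. */
    size_t len = (size_t)sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    /* First touch: the CPU raises a page fault, the kernel's fault
     * handler runs in kernel mode, maps a frame, and returns here. */
    p[0] = 'x';

    munmap(p, len);
    return 0;
}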
A quick test tells me that, on my Linux laptop, it takes about 0.11 microseconds for a userspace process to complete a simple syscall that does an insignificant amount of work beyond the switch to kernel mode and back. I'm using getuid(), which only copies a single integer from an in-memory struct. strace confirms that the syscall is repeated MAX times.
#include <unistd.h>

#define MAX 100000000

int main(void) {
    int ii;
    /* Issue MAX trivial syscalls; each one is a round trip
     * from user mode to kernel mode and back. */
    for (ii = 0; ii < MAX; ii++) getuid();
    return 0;
}
This takes about 11 seconds on my laptop, measured using time ./testover, and 11 seconds divided by 100 million gives 0.11 microseconds per call.
Technically, that's two mode switches, so I suppose you could claim that a single mode switch takes 0.055 microseconds, but a one-way switch isn't very useful, so I'd consider the there-and-back number to be the more relevant one.
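If you want the per-call number without relying on time(1) and dividing by hand, a minimal sketch using clock_gettime(CLOCK_MONOTONIC) (POSIX; on older glibc you may need to link with -lrt) times the loop in-process:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define MAX 100000000

int main(void) {
    struct timespec t0, t1;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < MAX; i++) getuid();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Round trip: user -> kernel -> user, per getuid() call. */
    printf("%.1f ns per syscall round trip\n", ns / MAX);
    return 0;
}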