On a bare-metal system (embedded microcontroller, no MMU, no paging), which is more expensive: a full context switch (register save and restore) or a function call (activation record allocation)?
I understand that this is highly dependent on calling convention and hardware capability, but how would I go about evaluating this?
EDIT:
To provide more context, I'm trying to model two scheduling schemes. The first is a pre-emptive scheduler that context switches between tasks. The second is a function-pointer run queue, where tasks are state machines broken into several enqueueable function calls and enqueuing happens on an IO-event-driven basis (roughly as in the sketch below).
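Here is a rough sketch of what I mean by the second scheme. The C is purely illustrative (the names, the queue size, and the single-ISR-producer assumption are mine, not from any particular library):

```c
/* Illustrative sketch of a function-pointer run queue:
   ISRs enqueue the next step of a task's state machine, and the main
   loop drains the queue, so each "task step" is an ordinary function call. */
#include <stdint.h>
#include <stddef.h>

typedef void (*task_fn)(void *ctx);

typedef struct {
    task_fn fn;
    void   *ctx;
} work_item;

#define QUEUE_LEN 16
static work_item queue[QUEUE_LEN];
static volatile uint8_t head, tail;   /* head: ISR writes, tail: main loop writes */

/* Called from an IO interrupt handler. If several ISRs can preempt each
   other, this would need a critical section around it. */
static int enqueue(task_fn fn, void *ctx)
{
    uint8_t next = (uint8_t)((head + 1) % QUEUE_LEN);
    if (next == tail)
        return -1;                    /* queue full, drop or flag the event */
    queue[head].fn  = fn;
    queue[head].ctx = ctx;
    head = next;
    return 0;
}

/* Main loop: each dispatch is just a call through a function pointer. */
static void run_queue(void)
{
    for (;;) {
        while (head != tail) {
            work_item item = queue[tail];
            tail = (uint8_t)((tail + 1) % QUEUE_LEN);
            item.fn(item.ctx);
        }
        /* idle here, e.g. sleep/WFI until the next interrupt */
    }
}
```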
For the most part, I can gather good data on how long my tasks take (both IO and CPU time) but I need some help figuring out the additional overhead costs to add as constants in my model.
Performing a context switch is comparatively expensive: the OS (or, on bare metal, your own scheduler) has to save the register state of the previously running thread, update the scheduler's bookkeeping metadata, and restore the register state of the next thread to run.
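To make that cost concrete, here is a minimal sketch of the save/restore half on an ARM Cortex-M3/M4 without an FPU (GCC syntax). The names `current_tcb` and `next_tcb` are illustrative, and interrupt masking and the pick-next-task logic are omitted, so treat this as a sketch rather than a drop-in port:

```c
#include <stdint.h>

typedef struct tcb {
    uint32_t  *sp;     /* saved process stack pointer (must be first member) */
    struct tcb *next;
} tcb_t;

tcb_t *current_tcb, *next_tcb;

/* PendSV-style context switch: the hardware has already stacked
   r0-r3, r12, lr, pc and xPSR on exception entry; software saves r4-r11. */
__attribute__((naked)) void PendSV_Handler(void)
{
    __asm volatile (
        "mrs   r0, psp            \n"  /* current task's stack pointer       */
        "stmdb r0!, {r4-r11}      \n"  /* save callee-saved registers        */
        "ldr   r1, =current_tcb   \n"
        "ldr   r2, [r1]           \n"
        "str   r0, [r2]           \n"  /* current_tcb->sp = r0               */
        "ldr   r3, =next_tcb      \n"
        "ldr   r2, [r3]           \n"
        "str   r2, [r1]           \n"  /* current_tcb = next_tcb             */
        "ldr   r0, [r2]           \n"  /* r0 = next_tcb->sp                  */
        "ldmia r0!, {r4-r11}      \n"  /* restore the next task's registers  */
        "msr   psp, r0            \n"
        "bx    lr                 \n"  /* exception return restores the rest */
    );
}
```

Even this stripped-down version moves sixteen registers (eight by hardware, eight by software) plus the stack-pointer loads and stores, which is already a few dozen cycles before any scheduler bookkeeping runs.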
When we switch between two threads of the same process, on the other hand, there is no need to flush the TLB, because all threads share the same address space and the cached translations remain valid. The cost of switching between threads is therefore much smaller than the cost of switching between processes. (On a bare-metal system with no MMU there is no TLB at all, so every switch is of this cheaper kind.)
The main advantage of context switching is that, even on a system with a single CPU, it gives the user the illusion that multiple processes are executing simultaneously. The switching is fast enough that the user never notices processes being swapped in and out.
Since the system calls that trigger context switches are themselves function calls, and the hardware interrupts that can also trigger them behave similarly (both end up signalling an event/semaphore and jumping into the scheduler entry point), I would say a plain function call is cheaper in CPU cycles unless an unreasonable number of parameters is being passed. The most reliable way to get the constants for your model, though, is to measure both on your target, for example as sketched below.
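A rough measurement sketch, assuming a Cortex-M3/M4/M7 target so the DWT cycle counter is available (the register addresses below are the standard architectural ones; `empty_task` and the function names are illustrative). Timing a call to an empty, non-inlined function bounds the run-queue dispatch cost; wrapping the same pattern around a yield into your scheduler, however your port triggers it, gives the context-switch cost:

```c
#include <stdint.h>

#define DEMCR      (*(volatile uint32_t *)0xE000EDFC)  /* Debug Exception and Monitor Control */
#define DWT_CTRL   (*(volatile uint32_t *)0xE0001000)
#define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004)

static void cyccnt_init(void)
{
    DEMCR      |= (1u << 24);   /* TRCENA: enable DWT/ITM block        */
    DWT_CYCCNT  = 0;
    DWT_CTRL   |= 1u;           /* CYCCNTENA: start the cycle counter  */
}

__attribute__((noinline)) static void empty_task(void) { }

static uint32_t measure_call_overhead(void)
{
    uint32_t start = DWT_CYCCNT;
    empty_task();               /* call + activation record + return   */
    return DWT_CYCCNT - start;  /* still includes the counter-read cost */
}
```

Run each measurement many times, and subtract the cycles for two back-to-back counter reads so you are left with the overhead itself rather than the instrumentation.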
This smells like an XY problem - why do you ask this? Context switches and function calls are almost orthogonal - one is a stack-based mechanism, the other selects a different stack entirely.