To be honest, I don't know whether there is a solution to my question, but I'd like to detect, in Swift, when a context switch is happening.
I was imagining a func that takes a long time to complete, such as a write operation on a remote server, and I was wondering whether there is a way to tell when (at which line, at least) the thread executing that task undergoes a context switch because another task that has been waiting for a long time has to run.
Sorry if this seems a stupid question to you, or if I made mistakes while trying to explain the above.
EDIT:
I'm talking about context switches that happen automatically, requested by the scheduler. So imagine again that we are in the middle of this long function, which performs tons of operations, and the scheduler gave this task a certain amount of time, for example 10 seconds, to complete. If the task runs out of time and doesn't finish, it will be suspended and the thread will, for example, execute another task of another process. When that ends, the scheduler might decide to give the suspended job another try, and execution will resume where it was suspended (so it will read the value from the PC register and keep going).
Calculating context switch time: if the total time taken for all the processes to complete was T, then the total context switch time = T − (sum over all processes of (waiting time + execution time)).
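As a rough illustration of that formula (a minimal sketch with made-up numbers; the Process type and the values below are not from the question):

```swift
// Hypothetical worked example of the formula above; all numbers are invented.
struct Process {
    let waitingTime: Double    // seconds spent waiting to be scheduled
    let executionTime: Double  // seconds spent actually running
}

let processes = [
    Process(waitingTime: 2.0, executionTime: 3.0),
    Process(waitingTime: 1.5, executionTime: 2.5),
]

let totalElapsedTime = 9.5  // T: wall-clock time for the whole workload

// Context-switch time = T − Σ(waiting time + execution time)
let accountedFor = processes.reduce(0) { $0 + $1.waitingTime + $1.executionTime }
let contextSwitchTime = totalElapsedTime - accountedFor
print(contextSwitchTime)  // 0.5 seconds in this made-up example
```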
A context switch occurs when the kernel transfers control of the CPU from an executing process to another that is ready to run. The kernel first saves the context of the process. The context is the set of CPU register values and other data that describes the process' state.
Context switching gets triggered during multiprocessing, interrupt handling, and switching from user mode to kernel mode. During a context switch, the data and state of the old process are stored in its process control block (PCB), and the CPU is allotted to the new process.
A context switch is a procedure that a computer's CPU (central processing unit) follows to change from one task (or process) to another while ensuring that the tasks do not conflict. Effective context switching is critical if a computer is to provide user-friendly multitasking.
You can absolutely get what you've asked for, but I have a feeling it's going to be much less useful to you than you may believe. The vast (vast, vast!) majority of context switches are going to occur at points that you probably would think of as "uninteresting": lstat64(), mach_vm_map_trap(), mach_msg_trap(), mach_port_insert_member_trap(), kevent_id(), the list goes on. Most threads spend most of their time deep in the OS stack. "A write operation on a remote server" isn't going to block after it takes some long period of time. It's going to proactively block itself because it knows it's going to take a (mind-bogglingly) long period of time.
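For example (my own minimal sketch, not from the answer; the queue label and URL are placeholders), a thread doing a synchronous-style remote write doesn't get forcibly evicted after a while: it parks itself almost immediately and is only scheduled again once the kernel has a reply for it:

```swift
import Foundation

// Hypothetical "remote write" from a background thread. The thread isn't
// preempted after some time slice; it voluntarily blocks (in semaphore.wait())
// while the kernel and the network do the slow work, and only runs again
// when the completion handler signals it.
let queue = DispatchQueue(label: "com.example.remote-write")  // made-up label

queue.async {
    var request = URLRequest(url: URL(string: "https://example.com/upload")!)  // placeholder URL
    request.httpMethod = "POST"
    request.httpBody = Data(repeating: 0, count: 1_000_000)

    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { _, _, error in
        if let error = error {
            print("write failed:", error)
        } else {
            print("write finished")
        }
        semaphore.signal()
    }.resume()

    // Blocks here, deep inside the OS, until the network round trip completes.
    semaphore.wait()
}
```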
Even so, you can certainly explore this with Instruments. Just choose the System Trace profile and it'll show you all the threads and the system cores, how your threads and all the other threads on the device are getting scheduled, every system call, etc etc etc. It's a huge amount of information, so you usually only profile for a few seconds at a time. But it'll look something like this: [System Trace timeline screenshot]
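If you want your own work to stand out in that timeline, one common companion technique (not part of the original answer; the subsystem, category, and function below are made up) is to wrap the interesting region in os_signpost intervals, which Instruments displays alongside the scheduling data:

```swift
import os.signpost

// Hypothetical signpost setup; the strings are placeholders.
let log = OSLog(subsystem: "com.example.myapp", category: "RemoteWrite")

func writeToRemoteServer() {
    let signpostID = OSSignpostID(log: log)
    os_signpost(.begin, log: log, name: "Remote write", signpostID: signpostID)
    defer { os_signpost(.end, log: log, name: "Remote write", signpostID: signpostID) }

    // ... the long-running work goes here ...
    // Any context switches that happen inside this interval will line up
    // with the signpost region in the System Trace timeline.
}
```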
This is useful information if you're at the point where context switches are a major bottleneck. This might happen if you're dealing with excessive lock contention, or if you're thrashing your L1 cache because you keep getting interrupted by some other thread. So if you have some thread that you expect to stay running pretty continuously, and it's getting blocked, this is really valuable information. Or if you have two threads that you think should work back and forth smoothly, but they seem to be fighting (switching rapidly), then that's something you could work on. (But this is rarely one of the first places you'd look for performance tuning unless you're working on quite low-level code.)
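As a rough illustration of the lock-contention case (again my own sketch, not the answer's code), two threads hammering a single lock keep blocking each other, and in a System Trace recording they show up as exactly that kind of rapid back-and-forth switching instead of running continuously:

```swift
import Foundation

// Hypothetical contention example: two workers fighting over one lock.
let lock = NSLock()
var counter = 0

let group = DispatchGroup()
for _ in 0..<2 {
    DispatchQueue.global().async(group: group) {
        for _ in 0..<1_000_000 {
            lock.lock()
            counter += 1      // tiny critical section, lots of contention
            lock.unlock()
        }
    }
}
group.wait()
print(counter)  // 2_000_000 once both workers finish
```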
From your description, I think you may have the wrong idea about the scheduler. Nothing in the scheduler is going to be on the order of 10 seconds. In the scheduler world, milliseconds are a lot of time. You should be thinking about things that take microseconds and even nanoseconds. If you're working on code that assumes fetching data from RAM is free, then you're on the wrong time scale. A network call is so ludicrously slow that you can basically estimate it as "forever." At the context-switch level you're looking at events like:
00:00.770.247 Virtual memory Zero Fill took 10.50 µs small_free_list_add_ptr ← (31 other frames)