On my desktop, I have a little widget that tells me my current CPU usage. It also shows the usage for each of my two cores.
I always wondered, how does the CPU calculate how much of its processing power is being used? Also, if the CPU is hung up doing some intense calculations, how can it (or whatever handles this activity) examine the usage, without getting hung up as well?
The total CPU usage of a process is the sum of its utime and stime (the 14th and 15th fields of /proc/<pid>/stat), divided by the time elapsed since the process started. To get that elapsed time, we can use the process's start time, which is the 22nd field of /proc/<pid>/stat and is called starttime.
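A minimal sketch of that calculation on Linux (not the widget's actual code) might look like the following; it assumes the /proc field layout documented in proc(5) and takes the PID as a command-line argument:

/* Sketch: average CPU usage of one process since it started,
 * computed from utime, stime, and starttime in /proc/<pid>/stat
 * plus the system uptime from /proc/uptime. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);

    char buf[4096];
    FILE *f = fopen(path, "r");
    if (!f || !fgets(buf, sizeof(buf), f)) {
        perror("stat");
        return 1;
    }
    fclose(f);

    /* Skip past the ")" that ends the comm field (field 2), since the
     * command name may contain spaces. The remaining fields are
     * whitespace-separated: 14 = utime, 15 = stime, 22 = starttime,
     * all in clock ticks. */
    char *p = strrchr(buf, ')');
    if (!p)
        return 1;
    p += 2;                               /* now at field 3 (state) */

    long utime = 0, stime = 0, starttime = 0;
    int field = 3;
    for (char *tok = strtok(p, " "); tok; tok = strtok(NULL, " "), field++) {
        if (field == 14) utime = atol(tok);
        else if (field == 15) stime = atol(tok);
        else if (field == 22) starttime = atol(tok);
    }

    double uptime;                        /* seconds since boot */
    f = fopen("/proc/uptime", "r");
    if (!f || fscanf(f, "%lf", &uptime) != 1) {
        perror("uptime");
        return 1;
    }
    fclose(f);

    long hz = sysconf(_SC_CLK_TCK);       /* clock ticks per second */
    double elapsed = uptime - (double)starttime / hz;  /* seconds alive */
    double used    = (double)(utime + stime) / hz;     /* CPU seconds used */

    printf("average CPU usage: %.2f%%\n", 100.0 * used / elapsed);
    return 0;
}

Tools like top do essentially the same thing, except they take two samples a short interval apart and divide the change in utime + stime by the length of that interval, giving current rather than lifetime-average usage.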
CPUs are designed to run safely at 100% utilization. However, you'll want to avoid sustained 100% utilization whenever it causes perceptible slowness in games.
If your CPU use temporarily spikes to 90% or 100%, that's normal if you're doing intensive tasks like high-end gaming or graphic design. So long as your CPU calms down after you're done, there's nothing to worry about.
There's a special task called the idle task that runs when no other task can be run. The % usage is just the percentage of the time we're not running the idle task. The OS will keep a running total of the time spent running the idle task:
If we take two samples of the running total n seconds apart, we can calculate the fraction of those n seconds spent running the idle task as (second sample - first sample) / n.
Note that this is something the OS does, not the CPU. The concept of a task doesn't exist at the CPU level! (In practice, the idle task will put the processor to sleep with a HLT instruction, so the CPU does know when it isn't being used)
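To make the sampling idea concrete, here is a rough sketch for Linux that reads the aggregate "cpu" line of /proc/stat twice and computes the idle percentage between the two samples. It assumes the field order documented in proc(5) (user, nice, system, idle, iowait, ...), and it divides by the total tick delta rather than wall-clock time, which also folds in the number of cores:

/* Sketch: sample the kernel's per-state CPU counters twice and report
 * what fraction of the interval was spent in the idle state. */
#include <stdio.h>
#include <unistd.h>

/* Read total and idle jiffies from the first line of /proc/stat. */
static int read_cpu(long long *total, long long *idle)
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return -1;

    long long v[10] = {0};
    int n = fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld %lld %lld %lld",
                   &v[0], &v[1], &v[2], &v[3], &v[4],
                   &v[5], &v[6], &v[7], &v[8], &v[9]);
    fclose(f);
    if (n < 4)
        return -1;

    *total = 0;
    for (int i = 0; i < 10; i++)
        *total += v[i];
    *idle = v[3];              /* 4th field: time spent in the idle task */
    return 0;
}

int main(void)
{
    long long t1, i1, t2, i2;

    if (read_cpu(&t1, &i1) != 0)
        return 1;
    sleep(2);                  /* the "n seconds" between the two samples */
    if (read_cpu(&t2, &i2) != 0)
        return 1;

    /* Fraction of elapsed ticks that were idle ticks. */
    double idle_pct = 100.0 * (i2 - i1) / (t2 - t1);
    printf("idle: %.1f%%  busy: %.1f%%\n", idle_pct, 100.0 - idle_pct);
    return 0;
}

A per-core widget like the one described in the question does the same arithmetic on the "cpu0", "cpu1", ... lines of /proc/stat instead of the aggregate "cpu" line.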
As for the second question, modern operating systems are preemptively multi-tasked, which means the OS can switch away from your task at any time. How does the OS actually steal the CPU away from your task? Interrupts: http://en.wikipedia.org/wiki/Interrupt
The CPU doesn't do the usage calculations by itself. It may have hardware features that make the task easier, but it's mostly the job of the operating system, so the details of the implementation will vary (especially on multicore systems).
The general idea is to look at how long the queue of things the CPU needs to do is. The operating system may sample the scheduler periodically to determine the number of tasks waiting to run.
This is the function in Linux (taken from Wikipedia) that performs that calculation:
#define FSHIFT    11          /* nr of bits of precision */
#define FIXED_1   (1<<FSHIFT) /* 1.0 as fixed-point */
#define LOAD_FREQ (5*HZ)      /* 5 sec intervals */
#define EXP_1     1884        /* 1/exp(5sec/1min) as fixed-point */
#define EXP_5     2014        /* 1/exp(5sec/5min) */
#define EXP_15    2037        /* 1/exp(5sec/15min) */

#define CALC_LOAD(load, exp, n) \
    load *= exp; \
    load += n*(FIXED_1-exp); \
    load >>= FSHIFT;

unsigned long avenrun[3];

static inline void calc_load(unsigned long ticks)
{
    unsigned long active_tasks; /* fixed-point */
    static int count = LOAD_FREQ;

    count -= ticks;
    if (count < 0) {
        count += LOAD_FREQ;
        active_tasks = count_active_tasks();
        CALC_LOAD(avenrun[0], EXP_1, active_tasks);
        CALC_LOAD(avenrun[1], EXP_5, active_tasks);
        CALC_LOAD(avenrun[2], EXP_15, active_tasks);
    }
}
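Note that what this computes is the load average (the smoothed length of the run queue, the numbers reported by uptime and /proc/loadavg), which is related to but not the same thing as a CPU-usage percentage. Every 5 seconds, each CALC_LOAD step updates an exponentially weighted moving average in fixed-point arithmetic: load = load * exp + active_tasks * (FIXED_1 - exp), shifted right by FSHIFT, where exp is 1/e^(5s/period) scaled to 11-bit fixed point, giving the familiar 1-, 5- and 15-minute averages.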
As for the second part of your question, most modern operating systems are multi-tasked. That means the OS is not going to let programs take up all the processing time and leave none for itself (unless you make it do that). In other words, even if an application appears hung, the OS can still steal some time away for its own work.