I am writing a little Python script to test a few things. Later I want to use it to create resource usage plots with gnuplot, but first some tests.
The script looks like this:
import subprocess

result = subprocess.check_output("top -b -n 1 -c", shell=True).split("\n")
head = result[:5]                        # summary block at the top of top's output
body = [x for x in result[7:] if x]      # process rows; removes empty strings

for line in head:
    print line

csum = 0.0
for line in body:
    print line
    csum += float(line.split()[8])       # column 8 is %CPU in top's default layout

print "CPU usage of all processes added up:", csum, "%"
Running it multiple times almost always results in a reported CPU usage > 100%, sometimes even > 200%. How can this be?
It runs in a virtual machine (VirtualBox, Ubuntu 14.04, 64 bit) with two cores. The host also has two cores.
Shouldn't the sum of the usage values of all running processes always be lower than 100%? I am running htop at the same time, and it shows about 50% load on every core.
Could the problem be that some processes started others, and both are shown in the output of top while the parent also includes the CPU usage of the child, so the child gets counted twice?
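One way to check that hypothesis is to list every process together with its parent and see whether both appear as separate rows with their own %CPU values. This is only a sketch, assuming the procps ps shipped with Ubuntu; the columns pid,ppid,pcpu,comm are requested explicitly so the field indices below are known:

import subprocess

# Ask ps for a fixed set of columns so the parsing below is predictable.
out = subprocess.check_output("ps -eo pid,ppid,pcpu,comm", shell=True).split("\n")
rows = [line.split(None, 3) for line in out[1:] if line.strip()]  # skip the header row

# Group children under their parent PID.
children = {}
for pid, ppid, pcpu, comm in rows:
    children.setdefault(ppid, []).append((pid, float(pcpu), comm))

# Print every busy parent together with its children.
for pid, ppid, pcpu, comm in rows:
    kids = children.get(pid, [])
    if kids and float(pcpu) > 0:
        print "parent", pid, comm, pcpu, "% -> children:", kids

If parents and children both show up with their own values, they are most likely being reported separately rather than the child being folded into the parent.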
In top, 100% means full utilization of one CPU/core/thread. If you have 8 CPUs, the maximum will be 800%.
If the CPUs have hyper-threading, the story is a bit more complicated: a hardware thread is not a full physical CPU, but Linux counts each one as a CPU.
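If that is the reason, the sum the script computes is relative to a single core, and dividing it by the number of logical CPUs brings it back into the 0-100% range. A minimal sketch, assuming multiprocessing.cpu_count() reports the same number of logical CPUs that top sees and that top's default column layout is unchanged:

import multiprocessing
import subprocess

ncores = multiprocessing.cpu_count()                    # logical CPUs as seen by Linux
lines = subprocess.check_output("top -b -n 1", shell=True).split("\n")
body = [x for x in lines[7:] if x]                      # same slicing as the script above
csum = sum(float(line.split()[8]) for line in body)     # column 8 is %CPU

print "logical CPUs:", ncores
print "raw sum:", csum, "%"
print "sum divided by core count:", csum / ncores, "%"

With two cores the raw sum can legitimately approach 200%, and the divided value is the figure that is comparable to htop's per-core display.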
In my experience, an oversized SQL database was causing problems: it made mysqld (the MySQL daemon) consume more memory than the system had, which caused the server to crash. When I tried the top command, it showed that the process was using over 100% of memory. So processes can also utilise more than 100% of system memory.
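For comparison, the same kind of summation can be run over the %MEM column, which is index 9 in top's default layout. A small sketch under that assumption; the result can also exceed 100%, partly because memory shared between processes is counted once per process:

import subprocess

lines = subprocess.check_output("top -b -n 1", shell=True).split("\n")
body = [x for x in lines[7:] if x]                      # process rows only

msum = sum(float(line.split()[9]) for line in body)     # column 9 is %MEM by default
print "%MEM of all processes added up:", msum, "%"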