I somehow get the following timing for my program. I understand that if there is I/O involved, real time can be larger than the sum of user time and system time, but how do you explain this when user time alone is larger than real time?
real    0m8.512s
user    0m8.737s
sys     0m1.956s
The program is probably using multiple cores at some point. User time is summed over the cores that have been used, so e.g. using 100% of two cores for 1s makes for 2s user time.
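You can reproduce this effect directly. A minimal sketch, assuming GNU coreutils' timeout is available and at least two cores are free:

    # two busy loops, each pinned at ~100% CPU for about 2 seconds, run in parallel
    time sh -c 'timeout 2 yes > /dev/null & timeout 2 yes > /dev/null & wait'
    # expected shape of the result: real ~2s, user ~4s

Because the two loops run on different cores at the same time, their CPU time adds up in user while the wall clock only advances once.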
It's because multiple GC threads work concurrently to share the workload, so real time can be less than user + sys time. Say user + sys time is 2 seconds: if 5 GC threads are working concurrently, real time should be somewhere in the neighbourhood of 400 milliseconds (2 seconds / 5 GC threads).
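If you suspect parallel GC is the source of the gap, you can pin the GC worker count and compare. A sketch, assuming a HotSpot JVM; MyApp is a hypothetical main class standing in for your program:

    # default: GC uses several worker threads, so user time can exceed real time
    time java MyApp

    # rerun with a single parallel GC worker thread to compare
    time java -XX:ParallelGCThreads=1 MyApp
    # if the gap between user and real shrinks, parallel GC was contributing to it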
In brief, real refers to the actual elapsed (wall-clock) time, including time during which other processes may be running; user and sys refer to CPU time used only by the process being timed.
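A minimal sketch of the difference (the exact figures will vary slightly by machine):

    time sleep 2
    # real ~2s  -- wall-clock time elapsed
    # user ~0s  -- almost no CPU spent in user mode
    # sys  ~0s  -- almost no CPU spent in the kernel

The process spends nearly all of its 2 seconds waiting, so real is large while user and sys stay near zero.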
System calls are executed in kernel mode, and that time is counted as sys; all other code runs in user mode. User time represents the time the command spent executing in user mode, which in the output above was 8.737 seconds.
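One way to see the user/sys split with standard tools (a sketch; the counts are arbitrary):

    # syscall-heavy: two million 1-byte read()/write() calls, so sys time dominates
    time dd if=/dev/zero of=/dev/null bs=1 count=2000000

    # CPU-bound shell arithmetic stays in user mode, so user time dominates
    time sh -c 'i=0; while [ $i -lt 500000 ]; do i=$((i+1)); done'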