I was wondering what the overhead of using the time command in Unix is. I know how to use it, but I want to know how much longer the command
$ time java HelloWorld
takes on a terminal than the command
$ java HelloWorld
I am specifically interested in how this overhead varies with the running time of the program being measured.
Context: I am using it to measure the time taken by a bunch of long-running experiments written in Java.
In computing, time is a command in Unix and Unix-like operating systems, used to determine the duration of execution of a particular command. (Tcl also has a time command, which calls the Tcl interpreter a given number of times to evaluate a script, but that is unrelated to the Unix utility discussed here.) When the timed command terminates, time prints a summary of the real (wall-clock) time, the user CPU time, and the system CPU time it consumed; for example, a run might report 0.001 seconds of CPU time spent in user mode and 0.002 seconds spent in kernel mode.
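To illustrate where those two numbers come from, here is a small sketch of my own (assuming a POSIX system with /dev/zero available; it is not part of the original explanation): it burns CPU in user mode, then makes a batch of read system calls, and finally asks the kernel for its own usage via getrusage.

/* Illustrative only: where "user" and "sys" CPU time accumulate. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    /* User-mode work: a pure computation loop. */
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; i++)
        x += i * 0.5;

    /* Kernel-mode work: lots of small system calls. */
    int fd = open("/dev/zero", O_RDONLY);
    char buf[1];
    for (int i = 0; i < 100000; i++)
        read(fd, buf, sizeof buf);
    close(fd);

    /* Ask the kernel how much CPU time this process used in each mode. */
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("user %ld.%06lds\n", (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    printf("sys  %ld.%06lds\n", (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    return 0;
}

These are the same two quantities that time reports as user and sys; real is simply the wall-clock time from start to finish.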
The overhead is fixed and, based on the source code, is only due to the fact that an extra process is being started (the time process itself), introducing a small amount of extra processing (a). Normally, the shell would start your program directly but, in this case, the shell starts time and time starts your process (with a fork).
This extra processing involves a fork and an exec of the child. While the process being measured is running, time itself is simply waiting for it to exit (with a wait call), so it has no impact on the process; a minimal sketch of this flow follows.
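To make that concrete, here is a rough sketch of my own (assuming a POSIX system; it is not the actual source of any time implementation) of what such a wrapper does: take a timestamp, fork and exec the child, wait for it, take another timestamp, and read the child's CPU usage with getrusage.

/* Rough sketch of a time-like wrapper; illustrative, not the real implementation. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    struct timeval start, end;
    gettimeofday(&start, NULL);          /* timestamp before starting the child */

    pid_t pid = fork();                  /* the "extra process" overhead lives here */
    if (pid == 0) {
        execvp(argv[1], &argv[1]);       /* replace the child with the measured command */
        perror("execvp");
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);            /* just sleep until the child exits */
    gettimeofday(&end, NULL);            /* timestamp after it finished */

    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);     /* CPU time consumed by the (reaped) child */

    double real = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

    printf("real %.3fs\nuser %.3fs\nsys  %.3fs\n", real, user, sys);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}

Note that the fork and exec sit between the two timestamps, so they are part of what gets measured; that is exactly the small fixed start-up cost described next, and the waitpid itself costs essentially nothing while the child runs.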
So, while the start-up time for the time process is actually included in the measurements, it will only be significant for very short processes. If your process runs for an appreciable amount of time, the overhead of time is irrelevant.
As to what I mean by appreciable, you can see the effect time has by running it on a very fast executable, and also check whether there is any appreciable increase in overhead for longer-running processes:
pax> time sleep 0
real 0m0.001s
user 0m0.000s
sys 0m0.000s
pax> time sleep 1
real 0m1.001s
user 0m0.000s
sys 0m0.000s
pax> time sleep 10
real 0m10.001s
user 0m0.000s
sys 0m0.004s
pax> time sleep 100
real 1m40.001s
user 0m0.000s
sys 0m0.000s
In other words, hardly any effect at all.
Now, since you're only likely to be timing processes that are long-running (it's hard to care whether a single process takes one millisecond or two unless you're running it many times in succession, in which case there are better ways to increase performance), the fixed overhead of time gets less and less important.
(a): And, if you're using a shell with time built in (such as bash with its time reserved word), even that small overhead disappears.
The overhead of time should be fairly constant regardless of the program being timed. All it has to do is take a timestamp, run the program, take another timestamp, and output the result.
In terms of accuracy, the shorter the program you are running, the more impact time will have on it. For example, time on "Hello World" is probably not going to give you good results, while time on something that runs for a decent period will be very accurate, since time's overhead will be well down in the noise.