
Is the UNIX `time` command accurate enough for benchmarks? [closed]

Let's say I wanted to benchmark two programs: foo.py and bar.py.

Are a couple thousand runs and the respective averages of `time python foo.py` and `time python bar.py` adequate for profiling and comparing their speed?


Edit: Additionally, if the execution of each program were sub-second (assume it isn't for the above), would `time` still be okay to use?
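One hedged sketch of how sub-second programs are often handled: time a batch of runs and divide by the count, so the per-run interpreter startup and exec() cost is amortized. Here `foo.py` is a stand-in script created just for the example, not the asker's actual program.

```shell
# foo.py is a hypothetical stand-in for the script under test.
printf 'print(sum(range(1000)))\n' > foo.py
N=100
# Time the whole batch; divide the reported total by N for a per-run estimate.
time sh -c "for i in \$(seq $N); do python3 foo.py >/dev/null; done"
```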

asked Jan 25 '12 16:01 by chrisdotcode

People also ask

What does Time Command do in Unix?

In computing, time is a command in Unix and Unix-like operating systems. It is used to determine the duration of execution of a particular command.

How do I run a benchmark test in Linux?

Open a terminal in the GeekBench directory that you just unpacked, and run the binary to start your test. After the test, Geekbench will give you a URL to view your complete test results. The results are organized in a table, with your complete score on top.

How does Linux calculate execution time?

Use the built-in time keyword:

$ help time
time: time [-p] PIPELINE
    Execute PIPELINE and print a summary of the real time, user CPU time,
    and system CPU time spent executing PIPELINE when it terminates.
    The return status is the return status of PIPELINE.
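For scripting, the builtin also accepts a -p flag that prints POSIX-format output, which is easy to parse; a small sketch:

```shell
# Run the builtin under bash with -p; the timing goes to stderr.
bash -c 'time -p sleep 0.2' 2> timing.txt
cat timing.txt    # three lines: real, user, sys, each in seconds
```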


2 Answers

time produces good enough measurements for benchmarks that run for more than a second; for shorter runs, the time spent exec()ing the process may be large compared to the program's own run time.

However, when benchmarking you should watch out for context switching. That is, another process may be using the CPU, contending with your benchmark and increasing its run time. To avoid contention with other processes, run the benchmark like this:

sudo chrt -f 99 /usr/bin/time --verbose <benchmark> 

Or

sudo chrt -f 99 perf stat -ddd <benchmark> 

sudo chrt -f 99 runs your benchmark in the FIFO real-time scheduling class with priority 99, which makes it the highest-priority process and avoids context switching (you can edit /etc/security/limits.conf so that real-time priorities do not require a privileged process).

The --verbose flag also makes time report all the available stats, including the number of context switches your benchmark incurred. This should normally be 0; otherwise you may like to rerun the benchmark.
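On Linux the same context-switch counters can also be read straight from the kernel, which is handy as a quick sanity check when GNU time isn't installed. A Linux-specific sketch, inspecting the current shell rather than a real benchmark:

```shell
# voluntary_ctxt_switches / nonvoluntary_ctxt_switches for the current shell;
# a benchmark's counters live in /proc/<benchmark pid>/status instead.
grep ctxt_switches /proc/$$/status
```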

perf stat -ddd is even more informative than /usr/bin/time and displays such information as instructions-per-cycle, branch and cache misses, etc.

It is also better to disable CPU frequency scaling and turbo boost, so that the CPU frequency stays constant during the benchmark and the results are consistent.
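A sketch of how that can be done through sysfs, assuming the Linux cpufreq layout (the no_turbo knob is specific to the intel_pstate driver; both writes require root, so this script quietly skips files it cannot write):

```shell
# Pin every CPU's governor to 'performance' (skipped where not writable).
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -w "$g" ]; then
        echo performance > "$g"
    fi
done

# Disable turbo boost (intel_pstate only; skipped where not writable).
nt=/sys/devices/system/cpu/intel_pstate/no_turbo
if [ -w "$nt" ]; then
    echo 1 > "$nt"
fi
```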

answered Sep 20 '22 21:09 by Maxim Egorushkin


Nowadays, imo, there is no reason to use time for benchmarking purposes. Use perf stat instead. It gives you much more useful information, can repeat the benchmark any given number of times, and computes statistics on the results, e.g. variance and mean. This is much more reliable and just as simple to use as time:

perf stat -r 10 -d <your app and arguments> 

The -r 10 runs your app 10 times and computes statistics over the runs. -d outputs some more data, such as cache misses.
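If perf isn't available, the mean/stddev that -r computes can be approximated by hand; a rough sketch using GNU date and awk, where sleep 0.1 is just a placeholder workload:

```shell
N=5
stats=$(for i in $(seq $N); do
    s=$(date +%s.%N)    # start timestamp (GNU date, nanosecond resolution)
    sleep 0.1           # placeholder for the real command
    e=$(date +%s.%N)    # end timestamp
    echo "$e $s"
done | awk '{d = $1 - $2; sum += d; sq += d*d; n++}
            END {m = sum/n; v = sq/n - m*m; if (v < 0) v = 0;
                 printf "mean=%.3fs stddev=%.3fs", m, sqrt(v)}')
echo "$stats"
```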

So while time might be reliable enough for long-running applications, it definitely is not as reliable as perf stat. Use that instead.

Addendum: If you really want to keep using time, at least don't use the bash builtin; use the real binary in verbose mode:

/usr/bin/time -v <some command with arguments> 

The output is then e.g.:

    Command being timed: "ls"
    User time (seconds): 0.00
    System time (seconds): 0.00
    Percent of CPU this job got: 0%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 1968
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 93
    Voluntary context switches: 1
    Involuntary context switches: 2
    Swaps: 0
    File system inputs: 8
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

Especially note how this is capable of measuring peak RSS, which is often all you need when comparing the effect of a patch on peak memory consumption: compare the value before and after, and if there is a significant decrease in peak RSS, you did something right.
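When GNU time isn't installed, peak RSS is also exposed directly by the kernel as VmHWM ("high water mark") in /proc; a Linux-specific sketch, inspecting the current shell:

```shell
# VmHWM is the process's peak resident set size, in kB.
grep VmHWM /proc/$$/status
```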

answered Sep 18 '22 21:09 by milianw