The following:

timeit print("foo")

returns something like:

100000 loops, best of 3: 2.35 µs per loop

I guess the 100000 relates to the number argument of timeit. But I don't understand what best of 3 means, or what time unit is in use. In this case it is probably microseconds, but I have also seen us and ns as units; I could not find an explanation in the documentation.
I'm presuming you are talking about the command-line interface here.
The command-line interface first tries to find a suitable number of loops: it starts with 10 runs and scales that up by a factor of 10 until the total time taken exceeds 0.2 seconds. If it prints 100000 loops, then 100000 was the number of loops that took 0.2 seconds or more.
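The scaling step described above can be sketched as a short loop; this is an illustration of the behaviour, not the actual CLI code (the real logic lives in timeit's main(), and newer Python versions expose a similar Timer.autorange() method):

```python
import timeit

# Sketch: grow the loop count by a factor of 10 until one timed run
# takes at least 0.2 seconds, as the timeit CLI does.
timer = timeit.Timer('x = 2 ** 10')
number = 10
while True:
    elapsed = timer.timeit(number)
    if elapsed >= 0.2:  # this loop count is "fast enough to measure"
        break
    number *= 10
print(number, elapsed)
```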
The 3
is the repeat count; the tests are repeated several times, with --repeat
defaulting to 3. The fastest time of those 3 is then taken.
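The repeat-and-take-the-fastest step is available programmatically as Timer.repeat(); a minimal sketch (the statement and loop count here are arbitrary examples):

```python
import timeit

# Sketch: time the statement 3 times, as --repeat 3 does, and keep
# the fastest total. The fastest run has the least interference from
# other processes, so it is the one reported.
timer = timeit.Timer('sum(range(100))')
number = 10000
raw_timings = timer.repeat(repeat=3, number=number)
best = min(raw_timings)   # fastest total time of the 3 runs
per_loop = best / number  # what the CLI finally reports
print(f'{number} loops, best of 3: {per_loop * 1e6:.3g} usec per loop')
```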
The total time for the fastest run is then divided by the loop count to give the per-loop time. That number is presented in seconds, milliseconds or microseconds, depending on how fast the loop was. There are 1000 microseconds in a millisecond and 1000 milliseconds in a second, and the unit is printed as sec, msec or usec, respectively.
IPython has its own %timeit command based on all this, but it abbreviates further, using s, ms, us and even ns if your code is fast enough to require a nanosecond unit. The us microsecond unit can also be displayed as µs if your terminal allows it.
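The unit-selection logic in both tools boils down to scaling by 1000 until the value is at least 1. Here is a hedged sketch; format_time is a hypothetical helper for illustration, not the actual function either tool uses:

```python
# Sketch: pick the largest unit in which the duration is >= 1,
# scaling by 1000 each step (s -> ms -> us -> ns, IPython-style;
# Python's own CLI stops at usec and spells the units sec/msec/usec).
def format_time(seconds, units=('s', 'ms', 'us', 'ns')):
    scale = 1.0
    for unit in units:
        if seconds * scale >= 1.0:
            return f'{seconds * scale:.3g} {unit}'
        scale *= 1000.0
    # Slower than the smallest unit covers: fall back to the last one.
    return f'{seconds * scale / 1000.0:.3g} {units[-1]}'

print(format_time(2.35e-6))  # microsecond range
print(format_time(0.5))      # millisecond range
print(format_time(3e-9))     # nanosecond range
```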
You can take a look at the Python 3.4 implementation, and compare it with the IPython implementation (with the time display handled by the _format_time() function).