I do not see the rationale behind Python's timeit module measuring the time using the best of 3. Here is an example from my console:
~ python -m timeit 'sum(range(10000))'
10000 loops, best of 3: 119 usec per loop
Intuitively, I would have summed the whole time and divided it by the number of loops. What is the intuition behind picking the best of 3 among all runs? It seems a bit unfair.
As noted in the documentation:
default_timer() measurements can be affected by other programs running on the same machine, so the best thing to do when accurate timing is necessary is to repeat the timing a few times and use the best time. The -r option is good for this; the default of 3 repetitions is probably enough in most cases.
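In other words, background processes can only ever slow a run down, never speed it up, so the minimum of several repetitions is the closest estimate of the code's true cost. What the command line does can be sketched with `timeit.repeat`, which returns one total time per repetition (the statement and loop count below are illustrative):

```python
import timeit

# Run the statement 1000 times, and repeat that whole measurement 3 times.
# Each element of `times` is the total seconds for one repetition.
times = timeit.repeat('sum(range(10000))', number=1000, repeat=3)

# The best (minimum) repetition has the least interference from other
# processes; dividing by the loop count gives the per-loop time.
best = min(times)
per_loop = best / 1000
print('best of 3: %.3f usec per loop' % (per_loop * 1e6))
```

An average, by contrast, would fold any scheduler hiccups or background load into the result, inflating the estimate.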