I have a function f(x) that takes as input a list x of 100 random floats between 0 and 1. Different lists will result in different running times of f.

I want to find out how long f takes to run on average, over a large number of different random lists. What's the best way to do this? Should I use timeit, and if so, is there a way I can do this without including the time it takes to generate each random list in each trial?

This is how I would do it without timeit (pseudocode):
results = []
for i = 1 to 10000:
    x = random list
    start = current time
    f(x)
    end = current time
    results.append(end - start)
return mean(results)
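For reference, here is a direct, runnable translation of that pseudocode (using time.perf_counter for the clock and a dummy f, since the real one isn't shown):

import random
import time

def f(x):
    # placeholder for the real f(x); only here for illustration
    return sorted(x)

results = []
for i in range(10000):
    x = [random.random() for _ in range(100)]  # list generation is not timed
    start = time.perf_counter()
    f(x)
    end = time.perf_counter()
    results.append(end - start)

print(sum(results) / len(results))  # mean running time in seconds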
You can make a timer decorator. Here is some example code:
from time import time

class Timer(object):
    def __init__(self, func):
        """
        Decorator that times a function
        @param func: Function being decorated
        @type func: callable
        """
        self.func = func

    def __call__(self, *args, **kwargs):
        # Return the elapsed wall-clock time instead of the function's result
        start = time()
        self.func(*args, **kwargs)
        end = time()
        return end - start

@Timer
def cheese():
    for var in range(9999999):
        continue

for var in range(100):
    print(cheese())
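Since the decorated function returns the elapsed time instead of its own result, you can average it directly over many random lists. A rough sketch of applying it to the question's setup (f here is just a stand-in for the real function):

import random

@Timer
def f(x):
    # stand-in body; the real f(x) would go here
    return sorted(x)

# time f on 10000 fresh random lists; building each list is not timed,
# because the decorator only times the call to f itself
trials = [f([random.random() for _ in range(100)]) for _ in range(10000)]
print(sum(trials) / len(trials))  # mean running time in seconds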
Working example, with fewer loops.
import timeit, random

def summer(myList):
    result = 0
    for num in myList:
        result += num
    return result

for i in range(10):
    x = [random.randint(0, 100) for i in range(100000)]
    print(timeit.timeit("summer(x)", setup="from __main__ import x, summer", number=100))
You can make the variable visible to the timed statement by importing it in the setup string with from __main__ import x, so building the list is never included in the measured time.
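On Python 3.5+ you can also skip the setup import and pass globals=globals() to timeit.timeit. A sketch that regenerates the list on every round so list creation stays outside the timed statement (the sum-based summer and the trial counts here are just example choices):

import random
import timeit

def summer(myList):
    return sum(myList)

per_call_times = []
for i in range(10):
    x = [random.random() for _ in range(100)]  # built outside the timed code
    t = timeit.timeit("summer(x)", globals=globals(), number=100)
    per_call_times.append(t / 100)  # average time per call in this round

print(sum(per_call_times) / len(per_call_times))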