I noticed by chance that a simple program generating a class from a large datafile ran a lot faster under Python 2.7 than under 3.5. I read here that the use of "infinite precision" integers was to blame for the slowdown in simple enumeration, but even when I tried a simple test instantiating this class I found that Python 3 was significantly slower:
class Benchmark(object):
    def __init__(self):
        self.members = ['a', 'b', 'c', 'd']

def test():
    test = Benchmark()

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test"))
I thought perhaps it had something to do with the size of each class instance, but the Python 3 instance was smaller than the Python 2 one (56 bytes vs. 64).
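The sizes can be checked with something like sys.getsizeof; note that it only counts the instance itself, not its __dict__ or the members list:

import sys

class Benchmark(object):
    def __init__(self):
        self.members = ['a', 'b', 'c', 'd']

# Reports only the bare instance (56 bytes on 64-bit Python 3, 64 on Python 2),
# not the attribute dictionary or the list it references.
print(sys.getsizeof(Benchmark()))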
$python3 benchmarks.py
0.7017288669958361
$python benchmarks.py
0.508942842484
I have tried many variations on this theme, including with 3.4 on a different machine, and still get the same results. Any ideas what's going on?
You are not measuring pure class instantiation time; you are measuring class instantiation plus attribute assignment, plus list creation, plus the overhead of the test() function call itself.
Here's a correct benchmark:
$ python -m timeit -s 'class C(object): pass' 'C()'
10000000 loops, best of 3: 0.0639 usec per loop
$ python3 -m timeit -s 'class C(object): pass' 'C()'
10000000 loops, best of 3: 0.0622 usec per loop
As you can see, Python 3 is slightly faster.
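If you want to see where the time in the original script actually goes, you can split the work apart with timeit yourself. This is only a sketch (the Empty class name is made up, and absolute numbers will vary by machine and interpreter), but it shows that the gap comes from the extra work in __init__, not from instantiation as such:

import timeit

setup = """
class Empty(object):
    pass

class Benchmark(object):
    def __init__(self):
        self.members = ['a', 'b', 'c', 'd']
"""

# Pure instantiation: nothing happens in __init__.
print(timeit.timeit("Empty()", setup=setup))

# Instantiation plus attribute assignment plus list creation,
# which is what the original benchmark was really timing.
print(timeit.timeit("Benchmark()", setup=setup))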