Benchmarking in Python: Why does my code run slower with repetition?

I have a simple Sieve of Eratosthenes implementation as follows:

# Generate all primes less than k
from math import sqrt

def sieve(k):
    s = [True] * k
    s[0] = s[1] = False
    # Mark the even numbers above 2 as composite
    for i in range(4, k, 2):
        s[i] = False

    # Mark odd multiples of each odd prime up to sqrt(k)
    for i in range(3, int(sqrt(k)) + 2, 2):
        if s[i]:
            for j in range(i ** 2, k, i * 2):
                s[j] = False

    return [2] + [ i for i in range(3, k, 2) if s[i] ]
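
For reference, a quick sanity check on a small input:

print sieve(30)   # prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]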

I am benchmarking this code by repeatedly generating primes under 10M:

from time import time

st = time()
for x in range(1000):
    rt = time()
    sieve(10000000)
    print "%3d %.2f %.2f" % (x, time() - rt, (time() - st) / (x + 1))

I am confused, as the time taken per test run increases markedly:

run   t  avg
  0 1.49 1.49
  1 1.79 1.66
  2 2.23 1.85
  3 2.72 2.07
  4 2.67 2.20
  5 2.87 2.31
  6 3.05 2.42
  7 3.57 2.56
  8 3.38 2.65
  9 3.48 2.74
 10 3.81 2.84
 11 3.75 2.92
 12 3.85 2.99
 13 4.14 3.07
 14 4.02 3.14
 15 4.05 3.20
 16 4.48 3.28
 17 4.41 3.34
 18 4.19 3.39
 19 4.22 3.43
 20 4.65 3.49

However, changing every instance of range to xrange eliminates the issue:

run   t  avg
  0 1.26 1.26
  1 1.23 1.28
  2 1.24 1.27
  3 1.25 1.26
  4 1.23 1.26
  5 1.23 1.25
  6 1.25 1.25
  7 1.25 1.25
  8 1.23 1.25
  9 1.25 1.25
 10 1.24 1.25
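
For reference, here is a sketch of the xrange variant that produced the flat timings above; the only change is swapping every range call for xrange (Python 2 only), and I have named it sieve_xrange here just to keep the two versions apart:

# Same sieve as above, with every range replaced by xrange so the loop
# iterators no longer materialize full lists.
from math import sqrt

def sieve_xrange(k):
    s = [True] * k
    s[0] = s[1] = False
    for i in xrange(4, k, 2):
        s[i] = False

    for i in xrange(3, int(sqrt(k)) + 2, 2):
        if s[i]:
            for j in xrange(i ** 2, k, i * 2):
                s[j] = False

    return [2] + [ i for i in xrange(3, k, 2) if s[i] ]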

Why is this the case? Is it really all GC overhead? A 3x slowdown after 20 runs seems like a lot...

asked Sep 16 '12 by verdesmarald

1 Answer

This is not (yet) an answer, but just a collection of organized experiments.

This is fascinating, really. It seems that something very dubious is going on with Python's memory allocator.

Here's my attempt to reduce the test case:

from math import sqrt
from time import time

def sieve(k):
    s = [True] * k

    for i in xrange(3, int(sqrt(k)) + 2, 2):
        for j in range(i ** 2, k, i * 2):
            s[j] = False

    return [ i for i in range(3, k, 2) if s[i] ]

st = time()
for x in range(1000):
    rt = time()
    sieve(10000000)
    print "%3d %.2f %.2f" % (x, time() - rt, (time() - st) / (x + 1))

Note that if I remove the if s[i] condition, make the inner range an xrange, make the return value a generator, or use pass as the inner loop body (or make it s[j] = True), the behaviour disappears and the times are flat.

The memory usage of Python increases steadily as the function runs, eventually reaching a plateau (at which point the running times start to plateau too, at about 250% of their initial values).
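
One way to watch this yourself (a sketch, assuming a Unix-like system; resource is in the standard library):

# Log per-run time alongside peak RSS. On Linux ru_maxrss is reported in
# kilobytes; note it is the *peak* RSS, so it can only grow or stay flat,
# which is enough to see the climb and the plateau.
import resource
from time import time

st = time()
for x in range(30):
    rt = time()
    sieve(10000000)
    rss_mb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0
    print "%3d %.2f %7.1f MB" % (x, time() - rt, rss_mb)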

My hypothesis is that the large number of inner ranges (of decreasing size), plus the final array, causes some sort of worst-case heap fragmentation, making it very hard to continue allocating objects.
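
On the GC part of the question: a quick experiment (sketched below; I have not measured the outcome here) is to disable the cyclic collector around the runs and see whether the slowdown persists:

# The sieve builds no reference cycles, so if the per-run time still climbs
# with the collector disabled, GC overhead is unlikely to be the main culprit.
import gc
from time import time

gc.disable()
try:
    st = time()
    for x in range(30):
        rt = time()
        sieve(10000000)
        print "%3d %.2f %.2f" % (x, time() - rt, (time() - st) / (x + 1))
finally:
    gc.enable()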

My recommendation would be to make a reduced test case and file it as a bug with the Python developers (bugs.python.org).

answered Oct 20 '22 by nneonneo