Why is shuffling list(range(n)) slower than shuffling [0]*n?

Using random.shuffle, I noticed that shuffling list(range(n)) takes about 25% more time than shuffling [0] * n. Here are times for sizes n from 1 million to 2 million:

[plot: times for random.shuffle(mylist)]

Why is shuffling list(range(n)) slower? Unlike for sorting a list (which needs to look at the objects) or copying a list (which increases reference counters inside the objects), the objects shouldn't matter here. This should just rearrange pointers inside the list.

I also tried numpy.random.shuffle, where shuffling list(range(n)) is three times (!) slower than shuffling [0] * n:

[plot: times for numpy.random.shuffle(mylist)]

I also tried a third way to rearrange the elements in the list, namely list.reverse, which, as expected, took equally long for both lists:

[plot: times for list.reverse(mylist)]

Just in case the shuffled order matters, I also tried list.reverse after shuffling the lists. Again, as expected, it took equally long for both lists, and also equally long as without that prior shuffling:

[plot: times for list.reverse(mylist) after shuffling]

So what's the difference? Both shuffling and reversing only need to rearrange pointers inside the list, so why do the objects matter for shuffling but not for reversing?

My benchmark code producing the times:

import random
import numpy
from timeit import timeit
from collections import defaultdict

shufflers = {
    'random.shuffle(mylist)': random.shuffle,
    'numpy.random.shuffle(mylist)': numpy.random.shuffle,
    'list.reverse(mylist)': list.reverse,
    }

creators = {
    'list(range(n))': lambda n: list(range(n)),
    '[0] * n': lambda n: [0] * n,
    }

for shuffler in shufflers:
    print(shuffler)
    for creator in creators:
        print(creator)
        times = defaultdict(list)
        for _ in range(10):
            for i in range(10, 21):
                n = i * 100_000
                mylist = creators[creator](n)
                # Uncomment next line for pre-shuffling
                # numpy.random.shuffle(mylist)
                time = timeit(lambda: shufflers[shuffler](mylist), number=1)
                times[n].append(time)
        # Indent next line further to see intermediate results
        print([round(min(times[n]), 9) for n in sorted(times)])
Asked Mar 02 '23 by Kelly Bundy

1 Answer

(note: I didn't have time to finish this answer, so here's a start -- this definitely doesn't fit in a comment, hopefully it can help someone else finish this out!)


this appears to be due to locality of reference (and perhaps a cpython implementation detail -- I don't see the same results in pypy for example)

a few data points before an attempt at explanation:

random.shuffle is implemented in pure python and works for any mutable sequence type -- it is not specialized for lists.

  • this means that every swap goes through __getitem__ (which increments the refcount of the item it returns) and __setitem__ (which decrements the refcount of the item it replaces)
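to make the pure-Python swap loop concrete, here is a simplified sketch of the Fisher–Yates shuffle that random.shuffle performs (the real implementation lives in Lib/random.py; the function name here is illustrative):

```python
import random

def shuffle_sketch(x):
    """Simplified sketch of random.shuffle's pure-Python loop (Fisher-Yates)."""
    for i in reversed(range(1, len(x))):
        j = random.randrange(i + 1)  # pick a random index 0 <= j <= i
        # each swap goes through __getitem__/__setitem__, which bump the
        # reference counts of the items being moved
        x[i], x[j] = x[j], x[i]

lst = list(range(10))
shuffle_sketch(lst)
print(sorted(lst))  # still a permutation of 0..9
```

since the loop runs in the interpreter and uses only the generic sequence protocol, it works on any mutable sequence, but it cannot avoid touching the items themselves.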

list.reverse is implemented in C and only works for list (using implementation details of a list)

  • this means that every swap happens without invoking __getitem__ or changing reference counts. the internal items of the list are rearranged directly

the important bit being the reference counts

in cpython, the reference count is stored with the object itself, and nearly all objects live on the heap. in order to adjust the reference count (even temporarily), the write to ob_refcnt pulls the object's PyObject structure into cache/memory/etc.
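the difference between the two lists can be made concrete: [0] * n has every slot pointing at one shared int object, while list(range(n)) has one distinct heap object per slot, so refcount writes during shuffling touch n scattered objects instead of one hot one (a small illustrative check, not a cache measurement):

```python
n = 1_000_000

same = [0] * n             # every slot points at the same int object
distinct = list(range(n))  # one distinct int object per slot

# count how many distinct PyObjects the list elements refer to
print(len(set(map(id, same))))      # 1 -- one ob_refcnt field, stays hot in cache
print(len(set(map(id, distinct))))  # 1000000 -- refcount writes scatter across the heap
```

this matches the hypothesis: reversing never writes to ob_refcnt, so it is equally fast for both lists, while shuffling in pure python writes refcounts on every swap and pays for the poor locality of a million separate objects.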

(here's where I ran out of time -- I would probably do some memory fault analysis to confirm this hypothesis)

Answered Mar 05 '23 by Anthony Sottile