 

List comprehension vs. generator expression: weird timeit results?

While answering this question, I preferred a generator expression and used this, which I thought would be faster since a generator doesn't need to create the whole list first:

>>> lis = [['a','b','c'],['d','e','f']]
>>> 'd' in (y for x in lis for y in x)
True

And Levon used a list comprehension in his solution:

>>> lis = [['a','b','c'],['d','e','f']]
>>> 'd' in [j for i in lis for j in i]
True

But when I ran timeit on these, the LC was faster than the generator:

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f']]" "'d' in (y for x in lis for y in x)"
100000 loops, best of 3: 2.36 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f']]" "'d' in [y for x in lis for y in x]"
100000 loops, best of 3: 1.51 usec per loop

Then I increased the size of the list and timed it again:

lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]] 

This time, when searching for 'd', the generator was faster than the LC, but when I searched for a middle element (11) and the last element (18), the LC again beat the generator expression, and I can't understand why.

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in (y for x in lis for y in x)"
100000 loops, best of 3: 2.96 usec per loop

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in [y for x in lis for y in x]"
100000 loops, best of 3: 7.4 usec per loop

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "11 in [y for x in lis for y in x]"
100000 loops, best of 3: 5.61 usec per loop

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "11 in (y for x in lis for y in x)"
100000 loops, best of 3: 9.76 usec per loop

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "18 in (y for x in lis for y in x)"
100000 loops, best of 3: 8.94 usec per loop

~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "18 in [y for x in lis for y in x]"
100000 loops, best of 3: 7.13 usec per loop
Asked by Ashwini Chaudhary, Aug 15 '12


2 Answers

Completely depends on the data.

Generators have a fixed setup cost that must be amortized over the number of items consumed; list comprehensions are faster initially but slow substantially as more memory is used with larger data sets.
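A quick sketch of that fixed setup cost: with an empty source list there is almost no per-item work, so the timings below are mostly the cost of building each expression itself. (Numbers vary by machine and CPython version, so none are hard-coded here; this is my illustration, not from the answer.)

```python
import timeit

# Time just the construction of each expression over an empty list.
empty = []
ge_setup = timeit.timeit("(y for x in empty for y in x)",
                         globals={"empty": empty}, number=100_000)
lc_setup = timeit.timeit("[y for x in empty for y in x]",
                         globals={"empty": empty}, number=100_000)
print(ge_setup, lc_setup)
```

Note that the generator expression's body never even runs here; creating it only builds a generator object, while the list comprehension executes its (empty) loop and builds a list.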

Recall that as CPython lists are expanded, they are resized in a growth pattern of 4, 8, 16, 25, 35, 46, 58, 72, 88, .... For larger list comprehensions, Python may be allocating up to 4x more memory than the size of your data. Once you start hitting virtual memory -- really slow! But, as stated, list comprehensions are faster than generators for small data sets.
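A small sketch of that over-allocation: append items one at a time and record the list's byte size. The size stays flat between reallocations and then jumps, so there are far fewer distinct sizes than appends. (Exact sizes and growth steps vary by CPython version.)

```python
import sys

lst = []
sizes = []
for i in range(64):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))  # byte size after each append

# Only a handful of distinct sizes appear: the list grows in steps,
# pre-allocating room for future appends.
distinct = sorted(set(sizes))
print(distinct)
```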

Consider case 1, a 2x26 list of lists:

import string  # needed for ascii_lowercase/ascii_uppercase
# cmpthese is a third-party benchmarking recipe, not part of the stdlib.

LoL=[[c1,c2] for c1,c2 in zip(string.ascii_lowercase,string.ascii_uppercase)]

def lc_d(item='d'):
    return item in [i for sub in LoL for i in sub]

def ge_d(item='d'):
    return item in (y for x in LoL for y in x)

def any_lc_d(item='d'):
    return any(item in x for x in LoL)

def any_gc_d(item='d'):
    return any([item in x for x in LoL])

def lc_z(item='z'):
    return item in [i for sub in LoL for i in sub]

def ge_z(item='z'):
    return item in (y for x in LoL for y in x)

def any_lc_z(item='z'):
    return any(item in x for x in LoL)

def any_gc_z(item='z'):
    return any([item in x for x in LoL])

cmpthese.cmpthese([lc_d,ge_d,any_gc_d,any_gc_z,any_lc_d,any_lc_z,lc_z,ge_z])

Results in these timings:

          rate/sec   ge_z   lc_z   lc_d any_lc_z any_gc_z any_gc_d   ge_d any_lc_d
ge_z       124,652     -- -10.1% -16.6%   -44.3%   -46.5%   -48.5% -76.9%   -80.7%
lc_z       138,678  11.3%     --  -7.2%   -38.0%   -40.4%   -42.7% -74.3%   -78.6%
lc_d       149,407  19.9%   7.7%     --   -33.3%   -35.8%   -38.2% -72.3%   -76.9%
any_lc_z   223,845  79.6%  61.4%  49.8%       --    -3.9%    -7.5% -58.5%   -65.4%
any_gc_z   232,847  86.8%  67.9%  55.8%     4.0%       --    -3.7% -56.9%   -64.0%
any_gc_d   241,890  94.1%  74.4%  61.9%     8.1%     3.9%       -- -55.2%   -62.6%
ge_d       539,654 332.9% 289.1% 261.2%   141.1%   131.8%   123.1%     --   -16.6%
any_lc_d   647,089 419.1% 366.6% 333.1%   189.1%   177.9%   167.5%  19.9%       --

Now consider case 2, which shows a wide disparity between an LC and a genexpr. In this case, we are looking for one element in a 100 x 97 x 97 list-of-lists structure:

LoL=[[str(a),str(b),str(c)]
        for a in range(100) for b in range(97) for c in range(97)]

def lc_10(item='10'):
    return item in [i for sub in LoL for i in sub]

def ge_10(item='10'):
    return item in (y for x in LoL for y in x)

def any_lc_10(item='10'):
    return any([item in x for x in LoL])

def any_gc_10(item='10'):
    return any(item in x for x in LoL)

def lc_99(item='99'):
    return item in [i for sub in LoL for i in sub]

def ge_99(item='99'):
    return item in (y for x in LoL for y in x)

def any_lc_99(item='99'):
    return any(item in x for x in LoL)

def any_gc_99(item='99'):
    return any([item in x for x in LoL])

cmpthese.cmpthese([lc_10,ge_10,any_lc_10,any_gc_10,lc_99,ge_99,any_lc_99,any_gc_99],c=10,micro=True)

Results in these times:

           rate/sec  usec/pass       ge_99      lc_99      lc_10  any_lc_99  any_gc_99  any_lc_10   ge_10 any_gc_10
ge_99             3 354545.903          --     -20.6%     -30.6%     -60.8%     -61.7%     -63.5% -100.0%   -100.0%
lc_99             4 281678.295       25.9%         --     -12.6%     -50.6%     -51.8%     -54.1% -100.0%   -100.0%
lc_10             4 246073.484       44.1%      14.5%         --     -43.5%     -44.8%     -47.4% -100.0%   -100.0%
any_lc_99         7 139067.292      154.9%     102.5%      76.9%         --      -2.4%      -7.0% -100.0%   -100.0%
any_gc_99         7 135748.100      161.2%     107.5%      81.3%       2.4%         --      -4.7% -100.0%   -100.0%
any_lc_10         8 129331.803      174.1%     117.8%      90.3%       7.5%       5.0%         -- -100.0%   -100.0%
ge_10       175,494      5.698  6221964.0% 4943182.0% 4318339.3% 2440446.0% 2382196.2% 2269594.1%      --    -38.5%
any_gc_10   285,327      3.505 10116044.9% 8036936.7% 7021036.1% 3967862.6% 3873157.1% 3690083.0%   62.6%        --

As you can see -- it depends and it is a tradeoff...

Answered by the wolf, Sep 22 '22


Expanding on Paulo's answer, generator expressions are often slower than list comprehensions because of the overhead of function calls. In this case, the short-circuiting behavior of in offsets that slowness if the item is found fairly early, but otherwise, the pattern holds.

I ran a simple script through the profiler for a more detailed analysis. Here's the script:

lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],
     [7,8,9],[10,11,12],[13,14,15],[16,17,18]]

def ge_d():
    return 'd' in (y for x in lis for y in x)
def lc_d():
    return 'd' in [y for x in lis for y in x]

def ge_11():
    return 11 in (y for x in lis for y in x)
def lc_11():
    return 11 in [y for x in lis for y in x]

def ge_18():
    return 18 in (y for x in lis for y in x)
def lc_18():
    return 18 in [y for x in lis for y in x]

for i in xrange(100000):
    ge_d()
    lc_d()
    ge_11()
    lc_11()
    ge_18()
    lc_18()

Here are the relevant results, reordered to make the patterns clearer.

         5400002 function calls in 2.830 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   100000    0.158    0.000    0.251    0.000 fop.py:3(ge_d)
   500000    0.092    0.000    0.092    0.000 fop.py:4(<genexpr>)
   100000    0.285    0.000    0.285    0.000 fop.py:5(lc_d)

   100000    0.356    0.000    0.634    0.000 fop.py:8(ge_11)
  1800000    0.278    0.000    0.278    0.000 fop.py:9(<genexpr>)
   100000    0.333    0.000    0.333    0.000 fop.py:10(lc_11)

   100000    0.435    0.000    0.806    0.000 fop.py:13(ge_18)
  2500000    0.371    0.000    0.371    0.000 fop.py:14(<genexpr>)
   100000    0.344    0.000    0.344    0.000 fop.py:15(lc_18)

Creating a generator expression is equivalent to creating a generator function and calling it. That accounts for one call to <genexpr>. Then, in the first case, next is called 4 times, until d is reached, for a total of 5 calls (times 100000 iterations = ncalls = 500000). In the second case, it is called 17 times, for a total of 18 calls; and in the third, 24 times, for a total of 25 calls.
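The call counting described above can be verified directly by wrapping the flattening genexpr in a counting iterator and watching how many next() calls `in` makes before it short-circuits. (The `CountingIter` class is my own illustration, written in Python 3 syntax rather than the Python 2 of the original script.)

```python
lis = [['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],
       [7,8,9],[10,11,12],[13,14,15],[16,17,18]]

class CountingIter:
    """Wraps an iterator and counts how many times next() is called."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self.calls = 0
    def __iter__(self):
        return self
    def __next__(self):
        self.calls += 1          # one increment per next() call
        return next(self._it)

ci_d = CountingIter(y for x in lis for y in x)
assert 'd' in ci_d
print(ci_d.calls)   # 4 -- 'd' is the 4th item yielded

ci_18 = CountingIter(y for x in lis for y in x)
assert 18 in ci_18
print(ci_18.calls)  # 24 -- 18 is the last of the 24 items
```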

The genex outperforms the list comprehension in the first case, but the extra calls to next account for most of the difference between the speed of the list comprehension and the speed of the generator expression in the second and third cases.

>>> .634 - .278 - .333
0.023
>>> .806 - .371 - .344
0.091

I'm not sure what accounts for the remaining time; it seems that generator expressions would be a hair slower even without the additional function calls. I suppose this confirms inspectorG4dget's assertion that "creating a generator comprehension has more native overhead than does a list comprehension." But in any case, this shows pretty clearly that generator expressions are slower mostly because of calls to next.
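One way to glimpse that native overhead is to disassemble the expression: a generator expression compiles to a separate code object that must be wrapped in a function object and called, which shows up as a MAKE_FUNCTION opcode. (What the list comprehension compiles to varies by CPython version -- it is inlined from 3.12 on -- so only the genexpr side is checked here; this is my sketch, not from the answer.)

```python
import dis

# Compile the genexpr without running it; `data` need not exist.
ge_code = compile("(y for x in data for y in x)", "<ge>", "eval")
ge_ops = {ins.opname for ins in dis.get_instructions(ge_code)}

# The generator expression builds and calls a separate function object:
print("MAKE_FUNCTION" in ge_ops)  # True

# Full disassembly, for the curious (output differs across versions):
dis.dis(ge_code)
```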

I'll add that when short-circuiting doesn't help, list comprehensions are still faster, even for very large lists. For example:

>>> counter = itertools.count()
>>> lol = [[counter.next(), counter.next(), counter.next()]
           for _ in range(1000000)]
>>> 2999999 in (i for sublist in lol for i in sublist)
True
>>> 3000000 in (i for sublist in lol for i in sublist)
False
>>> %timeit 2999999 in [i for sublist in lol for i in sublist]
1 loops, best of 3: 312 ms per loop
>>> %timeit 2999999 in (i for sublist in lol for i in sublist)
1 loops, best of 3: 351 ms per loop
>>> %timeit any([2999999 in sublist for sublist in lol])
10 loops, best of 3: 161 ms per loop
>>> %timeit any(2999999 in sublist for sublist in lol)
10 loops, best of 3: 163 ms per loop
>>> %timeit for i in [2999999 in sublist for sublist in lol]: pass
1 loops, best of 3: 171 ms per loop
>>> %timeit for i in (2999999 in sublist for sublist in lol): pass
1 loops, best of 3: 183 ms per loop

As you can see, when short circuiting is irrelevant, list comprehensions are consistently faster even for a million-item-long list of lists. Obviously for actual uses of in at these scales, generators will be faster because of short-circuiting. But for other kinds of iterative tasks that are truly linear in the number of items, list comprehensions are pretty much always faster. This is especially true if you need to perform multiple tests on a list; you can iterate over an already-built list comprehension very quickly:

>>> incache = [2999999 in sublist for sublist in lol]
>>> get_list = lambda: incache
>>> get_gen = lambda: (2999999 in sublist for sublist in lol)
>>> %timeit for i in get_list(): pass
100 loops, best of 3: 18.6 ms per loop
>>> %timeit for i in get_gen(): pass
1 loops, best of 3: 187 ms per loop

In this case, the list comprehension is an order of magnitude faster!

Of course, this only remains true until you run out of memory. Which brings me to my final point. There are two main reasons to use a generator: to take advantage of short-circuiting, and to save memory. For very large sequences/iterables, generators are the obvious way to go, because they save memory. But if short-circuiting is not an option, you pretty much never choose generators over lists for speed. You choose them to save memory, and it's always a trade-off.
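The memory half of that trade-off is easy to see with sys.getsizeof: a generator expression is a small fixed-size object no matter how much data it would yield, while the equivalent list comprehension stores every element up front. (A minimal sketch of my own, and note getsizeof reports only the container's own size, not the elements'.)

```python
import sys

lol = [[i, i + 1, i + 2] for i in range(10000)]

flat_list = [y for x in lol for y in x]   # 30000 elements, all materialized
flat_gen = (y for x in lol for y in x)    # lazy; nothing materialized yet

print(sys.getsizeof(flat_list))  # grows with the data
print(sys.getsizeof(flat_gen))   # small and roughly constant
```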

Answered by senderle, Sep 20 '22