Higher Order Functions vs loops - running time & memory efficiency?

Does using higher-order functions and lambdas make running time and memory efficiency better or worse? For example, to multiply all the numbers in a list:

nums = [1,2,3,4,5]
prod = 1
for n in nums:
    prod*=n

vs

from functools import reduce  # needed on Python 3; reduce is a builtin on Python 2
prod2 = reduce(lambda x, y: x * y, nums)

Does the HOF version have any advantage over the loop version other than fewer lines of code and a functional style?

EDIT:

I am not able to add this as an answer as I don't have the required reputation. I tried to profile the loop and HOF approaches using timeit as suggested by @DSM.

import timeit

def test1():
    # statement must not be indented, or timeit raises an IndentationError
    s = """\
nums = [a for a in range(1, 1001)]
prod = 1
for n in nums:
    prod *= n
"""
    t = timeit.Timer(stmt=s)
    return t.repeat(repeat=10, number=100)

def test2():
    s = """\
nums = [a for a in range(1, 1001)]
prod2 = reduce(lambda x, y: x * y, nums)
"""
    # the setup import is needed on Python 3, where reduce is not a builtin
    t = timeit.Timer(stmt=s, setup="from functools import reduce")
    return t.repeat(repeat=10, number=100)

And these are my results:

Loop:
[0.08340786340144211, 0.07211491653462579, 0.07162720686361926, 0.06593182661083438, 0.06399049758613146, 0.06605228229559557, 0.06419744588664211, 0.0671893658461038, 0.06477527090075941, 0.06418023793167627]
test1 average: 0.0644778902685
HOF:
[0.0759414223099324, 0.07616920129277016, 0.07570730355421262, 0.07604965128984942, 0.07547092059389193, 0.07544737286604364, 0.075532959799953, 0.0755039779810629, 0.07567424616704144, 0.07542563650187661]
test2 average: 0.0754917512762

On average, the loop approach seems to be faster than using HOFs.
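A likely explanation is that reduce pays for a Python-level lambda call on every step, while the loop's multiplication is handled directly by the bytecode. Here is a minimal sketch (assuming Python 3, with the imports moved into timeit's setup string) that also times reduce with operator.mul, a C-implemented function that avoids the lambda-call overhead:

import timeit

setup = "from functools import reduce; import operator; nums = list(range(1, 1001))"

loop_stmt = """\
prod = 1
for n in nums:
    prod *= n
"""

# best-of-10 timings for 100 runs each, matching the tests above
print(min(timeit.repeat(loop_stmt, setup=setup, repeat=10, number=100)))
print(min(timeit.repeat("reduce(lambda x, y: x * y, nums)", setup=setup, repeat=10, number=100)))
print(min(timeit.repeat("reduce(operator.mul, nums)", setup=setup, repeat=10, number=100)))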

asked Jan 28 '12 by Bharat

2 Answers

Higher-order functions can be very fast.

For example, map(ord, somebigstring) is much faster than the equivalent list comprehension [ord(c) for c in somebigstring]. The former wins for three reasons:

  • map() pre-sizes the result list to the length of somebigstring. In contrast, the list comprehension must make many calls to realloc() as it grows.

  • map() only has to do one lookup for ord, first checking globals, then checking and finding it in builtins. The list comprehension has to repeat this work on every iteration.

  • The inner loop for map runs at C speed. The loop body for the list comprehension is a series of pure Python steps that each need to be dispatched or handled by the eval-loop.

Here are some timings to confirm the prediction:

>>> from timeit import Timer
>>> print min(Timer('map(ord, s)', 's="x"*10000').repeat(7, 1000))
0.808364152908
>>> print min(Timer('[ord(c) for c in s]', 's="x"*10000').repeat(7, 1000))
1.2946639061
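Note that on Python 3, map() returns a lazy iterator rather than a list, so an apples-to-apples comparison needs list() around it. A minimal sketch, assuming Python 3:

from timeit import Timer

# wrap map() in list() so it does the same work as the comprehension
print(min(Timer('list(map(ord, s))', 's = "x" * 10000').repeat(7, 1000)))
print(min(Timer('[ord(c) for c in s]', 's = "x" * 10000').repeat(7, 1000)))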
answered by Raymond Hettinger


In my experience, loops can be very fast provided they are not nested too deeply and don't involve complex higher math operations. For simple operations and a single layer of loops, a loop can be as fast as any other approach, maybe faster, as long as only integers are used as the loop index; it really depends on what you are doing.

Also, the higher-order function may well perform just as many iterations as the loop version, and it might even be a little slower. You would have to time them both... just to be sure.
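For completeness, a minimal sketch assuming Python 3.8 or later: the standard library's math.prod computes the product in C, so it is usually worth including in any such timing comparison.

import math

nums = list(range(1, 1001))
print(math.prod(nums))  # same result as the loop and reduce() versions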

answered by John Einem