I began learning functional programming recently, and came up with this example when attempting to calculate my quiz average for a class.
The example I came up with is:
scores = [90, 91, 92, 94, 95, 96, 97, 99, 100]
def add(num1, num2):
    '''returns the sum of the parameters'''
    return num1 + num2
import operator
timeit reduce(add, scores) / len(scores) #--> 1000000 loops, best of 3: 799 ns per loop
timeit sum(scores) / len(scores) #--> 1000000 loops, best of 3: 207 ns per loop
timeit reduce(operator.add, scores) / len(scores) #--> 1000000 loops, best of 3: 485 ns per loop
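(For anyone reproducing this outside of IPython, here is a rough standalone sketch of the same benchmark using the standard timeit module; it assumes Python 3, where reduce must be imported from functools and / is true division:)

import timeit

setup = """
from functools import reduce
import operator

scores = [90, 91, 92, 94, 95, 96, 97, 99, 100]

def add(num1, num2):
    '''returns the sum of the parameters'''
    return num1 + num2
"""

statements = [
    "reduce(add, scores) / len(scores)",
    "sum(scores) / len(scores)",
    "reduce(operator.add, scores) / len(scores)",
]

for stmt in statements:
    # best of 5 repeats, 1,000,000 calls each, reported per call in nanoseconds
    best = min(timeit.repeat(stmt, setup=setup, number=1_000_000))
    print(f"{stmt}: {best * 1000:.0f} ns per loop")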
It would seem that, in the example above, using a higher-order function is almost 4 times slower.
So my question is: when is a good time to use a higher-order function, because clearly the example above is not one?
reduce() returns the value produced by the supplied function on the final step of iterating over the sequence. reduce() is a central concept in functional programming, where it's not possible to mutate any value, so in order to accumulate all the values in a sequence, one must return a new accumulator value on every iteration.
Python's reduce() is a function that implements a mathematical technique called folding or reduction. reduce() is useful when you need to apply a function to an iterable and reduce it to a single cumulative value.
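As a rough sketch of that behaviour (patterned on the pure-Python equivalent shown in the functools documentation, not the actual C implementation), reduce() works like this:

from functools import reduce
import operator

def my_reduce(function, iterable, initializer=None):
    # Roughly what reduce() does: fold the running accumulator
    # together with each successive element, left to right.
    it = iter(iterable)
    value = next(it) if initializer is None else initializer
    for element in it:
        value = function(value, element)
    return value

scores = [90, 91, 92, 94, 95, 96, 97, 99, 100]
assert my_reduce(operator.add, scores) == reduce(operator.add, scores) == 854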
reduce and sum do very different things. Consider a question like "I have a nested dictionary ...
d = {'foo': {'bar': {'baz': 'qux'}}}
and I would like to get the value associated with a list of keys: ['foo', 'bar', 'baz']
". This could call for a reduce
(if you're a functional programming kind of person):
>>> from functools import reduce  # needed on Python 3
>>> reduce(lambda subdict, k: subdict[k], ['foo', 'bar', 'baz'], d)
'qux'
Note, you can't do this with sum. It just happens that summing is an easy example to show what is happening with reduce (since you can write it out with parentheses, and most programmers are familiar with how parentheses group mathematical operations).
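For instance (a small sketch of my own), the grouping reduce performs on the first few scores from the question looks like this:

from functools import reduce
import operator

# reduce folds left to right, so
#     reduce(operator.add, [90, 91, 92, 94])
# groups exactly like the parenthesised expression below.
assert reduce(operator.add, [90, 91, 92, 94]) == (((90 + 91) + 92) + 94) == 367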
reduce() makes sense when you require an arbitrary operation over a list of data, not when you already have a heavily optimized library function that will not only outperform reduce() on small lists, but drastically outperform it on larger ones.
reduce() gives you the flexibility to create arbitrary folds, but that flexibility comes at the cost of some performance overhead, especially in a language where most basic functional constructs are considered slightly outside the mainstream.
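As a concrete illustration of such an "arbitrary fold" (my own example, not from the question): taking the gcd of a whole list has no single dedicated built-in in older Python versions, so reduce() is a natural fit.

from functools import reduce
import math

# An arbitrary fold with no heavily optimized one-shot built-in:
# the gcd of a whole list of numbers, folded pairwise.
numbers = [24, 36, 60, 96]
print(reduce(math.gcd, numbers))  # 12

(From Python 3.9 onward math.gcd itself accepts more than two arguments, but the pattern works for any two-argument function, which is the point of reduce().)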
Python is "functional" in that it has first-class functions, but it is not primarily a functional language. It provides a lush supply of iterators for use in loops and has all sorts of language features that make explicit loops easy to write, but is not focused around recursively defined list operations (though it does permit them to a limited degree -- lack of TCO prevents me from, say, paraphrasing my Erlang or Guile code directly in Python, but does give me the flexibility to do things like benchmark competing approaches that adhere to similar interfaces).