Often people say that it's not recommended to use recursive functions in Python (recursion depth restrictions, memory consumption, etc.).
I took a permutation example from this question.
def all_perms(str):
    if len(str) <= 1:
        yield str
    else:
        for perm in all_perms(str[1:]):
            for i in range(len(perm) + 1):
                yield perm[:i] + str[0:1] + perm[i:]
Afterwards I transformed it into a non-recursive version (I'm a Python newbie):
def not_recursive(string):
    perm = [string[0]]
    for e in string[1:]:
        perm_next = []
        for p in perm:
            perm_next.extend(p[:i] + e + p[i:] for i in range(len(p) + 1))
        perm = perm_next
    for p in perm:
        yield p
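(As a quick sanity check, not from the original post: both generators should produce exactly the same permutations as `itertools.permutations` on a small input. A minimal sketch, with both functions repeated here so it runs standalone:)

```python
# Sanity check (my own addition): both generators should yield the same
# set of permutations, matching itertools.permutations.
import itertools

def all_perms(s):
    if len(s) <= 1:
        yield s
    else:
        for perm in all_perms(s[1:]):
            for i in range(len(perm) + 1):
                yield perm[:i] + s[0:1] + perm[i:]

def not_recursive(string):
    perm = [string[0]]
    for e in string[1:]:
        perm_next = []
        for p in perm:
            perm_next.extend(p[:i] + e + p[i:] for i in range(len(p) + 1))
        perm = perm_next
    for p in perm:
        yield p

a = sorted(all_perms("abcd"))
b = sorted(not_recursive("abcd"))
c = sorted("".join(t) for t in itertools.permutations("abcd"))
assert a == b == c
print(len(a))  # 24 = 4!
```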
And I compared them:

from time import time
import itertools

before = time()
print len([p for p in all_perms("1234567890")])
print "time of all_perms %i" % (time() - before)

before = time()
print len([p for p in not_recursive("1234567890")])
print "time of not_recursive %i" % (time() - before)

before = time()
print len([p for p in itertools.permutations("1234567890")])
print "time of itertools.permutations %i" % (time() - before)
The results are quite interesting: the recursive function is the fastest (about 5 seconds), then the non-recursive version (8 seconds), and the built-in is last (35 seconds).
So are recursive functions really that bad in Python? And what is wrong with the built-in itertools.permutations?
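(An aside, not from the original post: wall-clock `time()` over a single run is noisy; the standard `timeit` module handles repetition and clock choice for you. A minimal sketch, timing only the built-in here to keep it short:)

```python
# A sketch (my addition) of benchmarking with the standard timeit module
# instead of single-shot time() measurements.
import itertools
import timeit

def consume(gen):
    # Exhaust a generator/iterator, discarding the values.
    for _ in gen:
        pass

# Time 10 full runs over a smaller input so this finishes quickly.
t = timeit.timeit(lambda: consume(itertools.permutations("1234567")),
                  number=10)
print("%.4f seconds for 10 runs" % t)
```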
Recursion is "bad" in Python because it is usually slower than an iterative solution, and because Python's stack depth is not unlimited (and there's no tail call optimization). For a sum function, yes, you probably want unlimited depth since it would be perfectly reasonable to want to sum a list of a million numbers, and the performance delta will become an issue with such a large number of items. In that case you should not use recursion.
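To make the sum case concrete (a minimal sketch of my own, not from the answer): a naive recursive sum uses one stack frame per element and blows up long before a million items, while the iterative built-in is unaffected.

```python
import sys

def rec_sum(lst):
    # Naive recursive sum: one Python stack frame per list element.
    if not lst:
        return 0
    return lst[0] + rec_sum(lst[1:])

print(sys.getrecursionlimit())  # 1000 by default in CPython

overflowed = False
try:
    rec_sum(list(range(50000)))  # far deeper than the default limit
except RecursionError:
    overflowed = True
print("overflowed:", overflowed)

print(sum(range(50000)))  # the iterative built-in handles this fine
```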
If you are walking a DOM tree read from an XML file, on the other hand, you are unlikely to exceed Python's recursion depth (1000 by default). You certainly could, but as a practical matter you probably won't. When you know ahead of time what kinds of data you'll be working with, you can be confident you won't overflow the stack.
A recursive tree walk is, in my opinion, much more natural to write and read than an iterative one, and the recursion overhead is generally a small part of the running time. If it really matters to you that it takes 16 seconds instead of 14, throwing PyPy at it might be a better use of your time.
Recursion seems a natural fit for the problem you posted and if you think the code is easier to read and maintain that way, and the performance is adequate, then go for it.
I grew up writing code on computers that, as a practical matter, limited recursion depth to about 16, if it was provided at all, so 1000 seems luxurious to me. :-)
Recursion is good for problems that lend themselves to clean, clear, recursive implementations. But like all programming you must perform some algorithm analysis to understand the performance characteristics. In the case of recursion, besides number of operations you must also estimate the maximum stack depth.
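As an example of that analysis (my own sketch, not from the answer): a divide-and-conquer recursion like binary search halves the range on every call, so its maximum stack depth is about log2(n) and it is safe even for enormous inputs.

```python
def bsearch(a, x, lo=0, hi=None):
    # Recursive binary search: each call halves the range, so the stack
    # depth is about log2(len(a)) -- roughly 30 frames for a billion items.
    if hi is None:
        hi = len(a)
    if lo >= hi:
        return -1  # not found
    mid = (lo + hi) // 2
    if a[mid] == x:
        return mid
    if a[mid] < x:
        return bsearch(a, x, mid + 1, hi)
    return bsearch(a, x, lo, mid)

data = list(range(0, 2000000, 2))  # a million sorted even numbers
print(bsearch(data, 123456))  # data[i] == 2*i, so 123456 sits at index 61728
print(bsearch(data, 3))       # odd number, not present: prints -1
```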
Most problems occur when new programmers assume recursion is magical and don't realize there is a stack underneath making it possible. New programmers have also been known to allocate memory and never free it, among other mistakes, so recursion is not unique in this danger.