When playing around, I often write simple recursive functions looking something like:
def f(a, b):
    if a >= 0 and b >= 0:
        return min(f(a-1, b), f(b, a-1))  # + some cost that depends on a, b
    else:
        return 0
(For example, when computing weighted edit distances, or evaluating recursively defined mathematical formulas.)
I then use a memoizing decorator to cache the results automatically.
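For concreteness, here is roughly what the decorated version looks like (a sketch only; functools.lru_cache, which also comes up in an answer below, stands in for my actual memoizing decorator):

import functools

@functools.lru_cache(maxsize=None)  # any memoizing decorator will do; lru_cache needs Python 3.2+
def f(a, b):
    if a >= 0 and b >= 0:
        return min(f(a - 1, b), f(b, a - 1))  # + some cost that depends on a, b
    else:
        return 0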
When I try something like f(200,10) I get:
RuntimeError: maximum recursion depth exceeded
This is as expected, because the recursive implementation exceeds Python's stack space / recursion limit.
I usually work around this problem in one of a few manual ways (for example, by warming up the cache with smaller arguments first), but I find all of these quite error-prone.
Is there a way to write an @Bigstack decorator that would simulate the effect of having a really big stack?
Note that my functions normally make several recursive function calls so this is not the same as tail recursion - I really do want to save all the internal state of each function on the stack.
I've been thinking about using a list of generators as my stack. By probing the stack frame I could work out when the function has been called recursively, and then trigger an exception to return to the decorator code. However, I can't work out a way of gluing these ideas together to make anything that actually works.
Alternatively, I could try accessing the abstract syntax tree for the function and try transforming calls to recursive functions to yield statements, but this seems like it's heading in the wrong direction.
Any suggestions?
EDIT
It certainly looks like I am misusing Python, but another approach I have been considering is to use a different thread for each block of, say, 500 stack frames and then insert queues between each consecutive pair of threads - one queue for arguments, and another queue for return values. (Each queue will have at most one entry in it.) I think this probably doesn't work for some reason - but I'll probably only work out why after I've tried to implement it.
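Here is a rough sketch of a simplified version of that idea (the chunk_stack name is a placeholder, a one-slot list plus Thread.join() stands in for the argument/return queues, and exceptions raised in a worker thread are not propagated):

import threading

def chunk_stack(max_depth=500):
    # Every `max_depth` levels of recursion, hand the next call to a fresh
    # thread, which starts with an empty stack and its own recursion-depth
    # counter. The spawned thread's result comes back via a one-slot list.
    local = threading.local()

    def decorator(func):
        def wrapper(*args, **kwargs):
            depth = getattr(local, 'depth', 0)
            if depth < max_depth:
                local.depth = depth + 1
                try:
                    return func(*args, **kwargs)
                finally:
                    local.depth = depth
            # Too deep for this thread: continue in a fresh thread.
            result = []
            t = threading.Thread(
                target=lambda: result.append(wrapper(*args, **kwargs)))
            t.start()
            t.join()
            return result[0]
        return wrapper
    return decorator

Applied on top of the memoized f above, the recursive calls go through the wrapper, so no single thread's stack grows deeper than max_depth frames, at the cost of one mostly idle thread per block.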
To get around the recursion limit, you can catch the RuntimeError
exception to detect when you've run out of stack space, and then return a continuation-ish function that, when called, restarts the recursion at the level where you ran out of space. Call this (and its return value, and so on) until you get a value, then try again from the top. Once you've memoized the lower levels, the higher levels won't run into a recursion limit, so eventually this will work. Put the repeated-calling-until-it-works in a wrapper function. Basically it's a lazy version of your warming-up-the-cache idea.
Here's an example with a simple recursive "add numbers from 1 to n inclusive" function.
import functools

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = args, tuple(sorted(kwargs.items()))
        if key in cache:
            return cache[key]
        else:
            result = func(*args, **kwargs)
            if not callable(result):
                cache[key] = result
            return result
    return wrapper
@memoize
def _addup(n):
    if n < 2:
        return n
    else:
        try:
            result = _addup(n - 1)
        except RuntimeError:
            return lambda: _addup(n)
        else:
            return result if callable(result) else result + n

def addup(n):
    result = _addup(n)
    while callable(result):
        while callable(result):
            result = result()
        result = _addup(n)
    return result

assert addup(5000) == sum(xrange(5001))
Rather than returning the lambda function all the way back up the call chain, we can raise an exception to short-circuit that, which both improves performance and simplifies the code:
# memoize function as above, or you can probably use functools.lru_cache

class UnwindStack(Exception):
    pass

@memoize
def _addup(n):
    if n < 2:
        return n
    else:
        try:
            return _addup(n - 1) + n
        except RuntimeError:
            raise UnwindStack(lambda: _addup(n))

def _try(func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except UnwindStack as e:
        return e[0]

def addup(n):
    result = _try(_addup, n)
    while callable(result):
        while callable(result):
            result = _try(result)
        result = _try(_addup, n)
    return result
This remains pretty inelegant, though, and still has a fair amount of overhead, and I can't imagine how you'd make a decorator out of it. Python isn't really suited to this kind of thing, I guess.
Here's an implementation that uses a list of generators as the stack:
def run_stackless(frame):
    stack, return_stack = [(False, frame)], []
    while stack:
        active, frame = stack.pop()
        action, res = frame.send(return_stack.pop() if active else None)
        if action == 'call':
            stack.extend([(True, frame), (False, res)])
        elif action == 'tail':
            stack.append((False, res))
        elif action == 'return':
            return_stack.append(res)
        else:
            raise ValueError('Unknown action', action)
    return return_stack.pop()
To use it you need to transform the recursive function according to the following rules:
return expr -> yield 'return', expr
recursive_call(args...) -> (yield 'call', recursive_call(args...))
return recursive_call(args...) -> yield 'tail', recursive_call(args...)
For example, with the cost function a * b, your function becomes:
def f(a, b):
    if a >= 0 and b >= 0:
        yield 'return', min((yield 'call', f(a-1, b)),
                            (yield 'call', f(b, a-1))) + (a * b)
    else:
        yield 'return', 0
Testing:
In [140]: run_stackless(f(30, 4))
Out[140]: 410
In Python 2.6.2 it appears to incur roughly an 8-10x performance hit compared to direct calls.
The 'tail' action is for tail recursion:
def factorial(n):
    acc = [1]
    def fact(n):
        if n == 0:
            yield 'return', 0
        else:
            acc[0] *= n
            yield 'tail', fact(n - 1)
    run_stackless(fact(n))
    return acc[0]
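As a quick check (my addition, not part of the original answer), the tail-call form runs far past the default recursion limit of 1000 without growing the Python stack:

import math

assert factorial(5000) == math.factorial(5000)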
The transformation to generator-recursive style is fairly easy, and could probably be done as a bytecode hack.
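In the meantime, the plumbing can be hidden behind a decorator along the lines the question asks for. Here is a rough sketch (the bigstack name and the .generator attribute are invented for illustration); recursive calls inside the body must still use the yield protocol and go through the raw generator rather than the wrapper:

import functools

def bigstack(gen_func):
    @functools.wraps(gen_func)
    def wrapper(*args, **kwargs):
        return run_stackless(gen_func(*args, **kwargs))
    wrapper.generator = gen_func  # raw generator, used by recursive 'call'/'tail' yields
    return wrapper

@bigstack
def f(a, b):
    if a >= 0 and b >= 0:
        yield 'return', min((yield 'call', f.generator(a - 1, b)),
                            (yield 'call', f.generator(b, a - 1))) + (a * b)
    else:
        yield 'return', 0

f(30, 4)  # should give 410, matching the run_stackless test above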