For the purposes of estimating the maximum call depth a recursive method may achieve with a given amount of memory, what is the (approximate) formula for calculating the memory used before a stack overflow error is likely to occur?
Many have responded with "it depends", which is reasonable, so let's remove some of the variables by using a trivial but concrete example:
public static int sumOneToN(int n) { return n < 2 ? 1 : n + sumOneToN(n - 1); }
It is easy to show that running this in my Eclipse IDE explodes for n just under 1000 (surprisingly low to me). Could this call depth limit have been estimated without executing it?
Edit: I can't help thinking that Eclipse has a fixed max call depth of 1000, because I got to 998, but there's one frame for main and one for the initial call to the method, making 1000 in all. That's "too round" a number IMHO to be a coincidence. I'll investigate further. I have just looked up the -Xss VM parameter; it sets the maximum stack size, so the Eclipse runner must be setting it somewhere.
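One way to test the -Xss hypothesis without touching the launch configuration is to run the recursion on a thread with an explicit stack size: the four-argument Thread constructor accepts a requested stack size in bytes (the JavaDoc warns it is only a hint that some VMs ignore, so treat this as a sketch). If the depth scales with the requested size, the limit is the stack, not a fixed call-depth cap:

```java
public class DepthProbe {
    static int depth;

    static void recurse() {
        depth++;
        recurse();
    }

    // Runs the recursion on a thread with the given requested stack size
    // (bytes) and returns the depth reached before StackOverflowError.
    static int maxDepth(long stackBytes) throws InterruptedException {
        depth = 0;
        Thread t = new Thread(null, () -> {
            try {
                recurse();
            } catch (StackOverflowError expected) {
                // ran off the end of the stack; depth holds the count
            }
        }, "probe", stackBytes);
        t.start();
        t.join();
        return depth;
    }

    public static void main(String[] args) throws InterruptedException {
        int small = maxDepth(256 * 1024);       // 256 KB stack
        int large = maxDepth(4 * 1024 * 1024);  // 4 MB stack
        System.out.println(small + " " + large);
    }
}
```

On VMs that honor the hint, the larger stack should reach a roughly proportionally greater depth.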
This is clearly JVM- and possibly also architecture-specific.
I've measured the following:
static int i = 0;

public static void rec0() { i++; rec0(); }

public static void main(String[] args) {
    ...
    try {
        i = 0;
        rec0();
    } catch (StackOverflowError e) {
        System.out.println(i);
    }
    ...
}
using
Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
running on x86.
With a 20MB Java stack (-Xss20m), the amortized cost fluctuated around 16-17 bytes per call. The lowest I've seen was 16.15 bytes/frame. I therefore conclude that the cost is 16 bytes/frame and the rest is other (fixed) overhead.
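Put as a back-of-envelope formula: maxDepth ≈ (stackSize − fixedOverhead) / bytesPerFrame. The constants below are my measured estimates, not guarantees, and the overhead figure is a placeholder guess:

```java
public class StackEstimate {
    // Rough estimate of maximum recursion depth for a given stack size.
    // bytesPerFrame = 16 is the amortized cost measured above;
    // fixedOverhead is a guessed allowance for the non-recursive
    // portion of the stack (main, runtime frames, etc.).
    static long estimateMaxDepth(long stackBytes, long fixedOverhead,
                                 long bytesPerFrame) {
        return (stackBytes - fixedOverhead) / bytesPerFrame;
    }

    public static void main(String[] args) {
        long twentyMb = 20L * 1024 * 1024;
        // roughly 1.3 million frames for a trivial method on this VM
        System.out.println(estimateMaxDepth(twentyMb, 64 * 1024, 16));
    }
}
```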
A function that takes a single int has basically the same cost, 16 bytes/frame.
Interestingly, a function that takes ten ints requires 32 bytes/frame. I am not sure why the cost is so low.
The above results apply after the code's been JIT compiled. Prior to compilation the per-frame cost is much, much higher. I haven't yet figured out a way to estimate it reliably. However, this does mean that you have no hope of reliably predicting maximum recursion depth until you can reliably predict whether the recursive function has been JIT compiled.
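One mitigation when measuring (a sketch, not a guarantee; HotSpot's compilation thresholds are implementation details) is to trigger the overflow repeatedly and keep only the last measurement, by which point the method has usually been JIT compiled and the depth has stabilized at its higher, compiled-code value:

```java
public class WarmedProbe {
    static int depth;

    static void recurse() { depth++; recurse(); }

    public static void main(String[] args) {
        int last = 0;
        // Repeat the measurement; early runs are interpreted, later runs
        // usually hit the JIT-compiled version and reach greater depth.
        for (int run = 0; run < 20; run++) {
            depth = 0;
            try {
                recurse();
            } catch (StackOverflowError e) {
                last = depth;
            }
        }
        System.out.println(last);
    }
}
```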
All of this was tested with ulimit stack sizes of 128K and 8MB. The results were the same in both cases.