I decided to try a few experiments to see what I could discover about the size of stack frames, and how far through the stack the currently executing code was. There are two interesting questions we might investigate here:

1. How many levels deep into the stack is the currently executing code?
2. How many more levels of recursion can it go before it hits a StackOverflowError?

Here's the best I could come up with for the first question:
```java
public static int levelsDeep() {
    try {
        throw new Exception();
    } catch (Exception e) {
        return e.getStackTrace().length;
    }
}
```
This seems a bit hacky. It generates and catches an exception, and then looks to see what the length of the stack trace is.
Unfortunately it also seems to have a fatal limitation, which is that the maximum length of the stack trace returned is 1024. Anything beyond that gets axed, so the maximum that this method can return is 1024.
Question:
Is there a better way of doing this that isn't so hacky and doesn't have this limitation?
For what it's worth, my guess is that there isn't: Throwable.getStackTraceDepth() is a native call, which suggests (but doesn't prove) that it can't be done in pure Java.
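For completeness: on Java 9 and later, the StackWalker API can count the live frames directly, without the exception hack and without the 1024-frame cap. A minimal sketch, assuming a Java 9+ runtime (the class name is just for illustration):

```java
import java.util.stream.Stream;

public class StackDepth {
    // Counts the frames on the current thread's stack, starting from the
    // caller of walk() (Java 9+ only).
    static long levelsDeep() {
        return StackWalker.getInstance().walk(Stream::count);
    }

    public static void main(String[] args) {
        System.out.println(levelsDeep());  // typically 2 here: levelsDeep and main
    }
}
```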
The number of levels we can reach will be determined by (a) the size of a stack frame, and (b) the amount of stack left. Let's not worry about the size of a stack frame, and just see how many levels we can reach before we hit a StackOverflowError.
Here's my code for doing this:
```java
public static int stackLeft() {
    try {
        return 1 + stackLeft();
    } catch (StackOverflowError e) {
        return 0;
    }
}
```
It does its job admirably, even if it's linear in the amount of stack remaining. But here is the very, very weird part. On 64-bit Java 7 (OpenJDK 1.7.0_65), the results are perfectly consistent: 9,923, on my machine (Ubuntu 14.04 64-bit). But Oracle's Java 8 (1.8.0_25) gives me non-deterministic results: I get a recorded depth of anywhere between about 18,500 and 20,700.
Now why on earth would it be non-deterministic? There's supposed to be a fixed stack size, isn't there? And all of the code looks deterministic to me.
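One way to take the default stack size partly out of the equation is to run the probe on a thread whose stack size is requested explicitly. This is only a sketch, not something the measurements above used, and the stackSize argument is merely a hint that the JVM is free to round or ignore:

```java
// Sketch: run stackLeft() (defined above) on a thread with an explicitly
// requested stack size. The fourth constructor argument is only a hint.
public static void main(String[] args) throws InterruptedException {
    Thread probe = new Thread(null, () -> System.out.println(stackLeft()), "probe", 1 << 20);
    probe.start();
    probe.join();
}
```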
I wondered whether it was something weird with the error trapping, so I tried this instead:
```java
public static long badSum(int n) {
    if (n == 0) return 0;
    else return 1 + badSum(n - 1);
}
```
Clearly this will either return the input it was given, or overflow.
Again, the results I get are non-deterministic on Java 8. If I call badSum(14500), it will give me a StackOverflowError about half the time, and return 14500 the other half. But on Java 7 OpenJDK, it's consistent: badSum(9160) completes fine, and badSum(9161) overflows.
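A small harness along these lines (hypothetical, not the exact test used above) makes the flip-flopping easy to observe; repeated calls within one JVM can also shift as the JIT warms up:

```java
// Hypothetical harness: call badSum() (defined above) repeatedly near the
// observed limit and record whether each attempt completes or overflows.
public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
        try {
            System.out.println("attempt " + i + ": " + badSum(14500));
        } catch (StackOverflowError e) {
            System.out.println("attempt " + i + ": StackOverflowError");
        }
    }
}
```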
Question:
Why is the maximum recursion depth non-deterministic on Oracle's Java 8? And why is it deterministic on OpenJDK 7?
The observed behavior is affected by the HotSpot optimizer; however, it is not the only cause. When I run the following code
```java
public static void main(String[] argv) {
    System.out.println(System.getProperty("java.version"));
    System.out.println(countDepth());
    System.out.println(countDepth());
    System.out.println(countDepth());
    System.out.println(countDepth());
    System.out.println(countDepth());
    System.out.println(countDepth());
    System.out.println(countDepth());
}

static int countDepth() {
    try {
        return 1 + countDepth();
    } catch (StackOverflowError err) {
        return 0;
    }
}
```
with JIT enabled, I get results like:
```
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -cp build\classes X
1.8.0_40-ea
2097
4195
4195
4195
12587
12587
12587
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -cp build\classes X
1.8.0_40-ea
2095
4193
4193
4193
12579
12579
12579
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -cp build\classes X
1.8.0_40-ea
2087
4177
4177
12529
12529
12529
12529
```
Here, the effect of the JIT is clearly visible: the optimized code obviously needs less stack space, and the stepwise jumps show that tiered compilation is enabled (indeed, using -XX:-TieredCompilation shows a single jump if the program runs long enough).
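For reference, that variant would be invoked along these lines (output omitted, since the exact numbers depend on the machine):

```
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -XX:-TieredCompilation -cp build\classes X
```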
In contrast, with disabled JIT I get the following results:
```
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -Xint -cp build\classes X
1.8.0_40-ea
2104
2104
2104
2104
2104
2104
2104
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -Xint -cp build\classes X
1.8.0_40-ea
2076
2076
2076
2076
2076
2076
2076
> f:\Software\jdk1.8.0_40beta02\bin\java -Xss68k -server -Xint -cp build\classes X
1.8.0_40-ea
2105
2105
2105
2105
2105
2105
2105
```
The values still vary from one JVM run to the next, but they are constant within a single run, and the variation is much smaller.
So, there is a (rather small) difference that becomes much larger if the optimizer can reduce the stack space required per method invocation, e.g. due to inlining.
What can cause such a difference? I don't know how this JVM does it, but one scenario could be that the way a stack limit is enforced requires a certain alignment of the stack end address (e.g. matching memory page sizes), while the memory allocation returns memory whose start address has a weaker alignment guarantee. Combine such a scenario with ASLR and there might always be a difference, within the size of the alignment requirement.
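To put rough numbers on that scenario (the page size and per-frame cost below are assumptions, purely for illustration):

```java
// Illustrative arithmetic only: if the effective stack end must be page-aligned
// while the allocation's start address is not, the usable stack can differ
// between runs by up to one page, which corresponds to this many recursion levels.
public class AlignmentSlack {
    public static void main(String[] args) {
        int pageSize = 4096;      // assumed alignment granularity (4 KiB page)
        int bytesPerFrame = 48;   // assumed stack cost of one interpreted frame
        System.out.println(pageSize / bytesPerFrame);  // roughly 85 levels of slack
    }
}
```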