I am trying to write an optimized Fibonacci function as an assignment, so that it can compute fib(300) and fib(8000). Here is what I have so far (map is a HashMap):
public static BigInteger fibMemo(int n) {
    if (n <= 1) {
        return BigInteger.valueOf(n);
    }
    if (!map.containsKey(n)) {
        map.put(n, fibMemo(n - 1).add(fibMemo(n - 2)));
    }
    return map.get(n);
}
When I call
System.out.println("300: " + FibonacciMemo.fibMemo(300));
on its own, it works just fine. Also,
System.out.println("8000: " + FibonacciMemo.fibMemo(8000));
works fine if I comment out the previous call to fibMemo(300). But if I keep both calls, I get a stack overflow in the recursive fibMemo. This seems very strange to me. Could someone please clarify the situation? Thanks in advance.
Here is the code:
import java.util.HashMap; // Import Java's HashMap so we can use it
import java.math.BigInteger;

public class FibonacciMemo {
    private static HashMap<Integer, BigInteger> map = new HashMap<Integer, BigInteger>();

    /**
     * The classic recursive implementation with no memoization. Don't care
     * about graceful error catching, we're only interested in performance.
     *
     * @param n
     * @return The nth fibonacci number
     */
    public static int fibNoMemo(int n) {
        if (n <= 1) {
            return n;
        }
        return fibNoMemo(n - 2) + fibNoMemo(n - 1);
    }

    /**
     * Your optimized recursive implementation with memoization.
     * You may assume that n is non-negative.
     *
     * @param n
     * @return The nth fibonacci number
     */
    public static BigInteger fibMemo(int n) {
        // YOUR CODE HERE
        if (n <= 1) {
            return BigInteger.valueOf(n);
        }
        if (!map.containsKey(n)) {
            map.put(n, fibMemo(n - 1).add(fibMemo(n - 2)));
        }
        return map.get(n);
    }

    public static void main(String[] args) {
        // Optional testing here
        String m = "Fibonacci's real name was Leonardo Pisano Bigollo.";
        m += "\n" + "He was the son of a wealthy merchant.\n";
        System.out.println(m);
        System.out.println("300: " + FibonacciMemo.fibMemo(300));
        System.out.println("8000: " + FibonacciMemo.fibMemo(8000));
        // 46th Fibonacci = 1,836,311,903
        // 47th Fibonacci = 2,971,215,073
    }
}
There are two problems with your code. The obvious one is stack consumption. Memoization does reduce the time complexity from exponential to linear, but the method still consumes stack space linearly: for an input value of 8000, it allocates roughly 8000 stack frames.
As stated in the docs, the default stack size per thread is 320 kB, which suffices for roughly 1000 to 2000 frames, and that is not enough here. You could increase the stack size with the -Xss JVM switch, but that is still not bulletproof. You should use an iterative implementation instead.
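For illustration, raising the limit would look something like this when launching the program (the 16m figure is an arbitrary assumption; you would have to tune it to your input):

java -Xss16m FibonacciMemo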
The second problem is that your static cache is never cleared, which is effectively a memory leak. You could wrap the recursive method in another one that clears the hashmap after the recursion terminates, but that throws away some of the performance, because the results of one invocation cannot be reused by the following ones.
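A rough sketch of that wrapper idea (fibMemoOnce is a hypothetical name, not part of your assignment):

public static BigInteger fibMemoOnce(int n) {
    try {
        return fibMemo(n);  // reuse the memoized recursion
    } finally {
        map.clear();        // drop the cache so it cannot grow without bound
    }
}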
A more performant solution would be to use a proper cache implementation that doesn't require manual cleaning, but handles size limits and eviction on its own. Guava provides such an implementation.
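A minimal sketch using Guava's Cache (the maximumSize value is an arbitrary assumption, and note this only fixes the unbounded cache growth, not the stack depth):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.math.BigInteger;

public class FibonacciGuava {
    // Entries are evicted automatically once the cache exceeds maximumSize.
    private static final Cache<Integer, BigInteger> cache =
            CacheBuilder.newBuilder().maximumSize(10_000).build();

    public static BigInteger fibMemo(int n) {
        if (n <= 1) {
            return BigInteger.valueOf(n);
        }
        BigInteger cached = cache.getIfPresent(n);
        if (cached == null) {
            cached = fibMemo(n - 1).add(fibMemo(n - 2));
            cache.put(n, cached);
        }
        return cached;
    }
}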
It appears that your recursive algorithm is simply too deep for Java's default stack size. The stack is far smaller than the heap, and you shouldn't use algorithms with this much recursion depth anyway. Some languages optimize tail calls, but the JVM does not, and this recursion is not a tail call in any case: the results of the recursive calls are combined after they return.
So the best solution imo is just to rewrite your code to use loops instead.
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

private final static List<BigInteger> fibs = new ArrayList<>();

static {
    fibs.add( BigInteger.ZERO );
    fibs.add( BigInteger.ONE );
}

public static BigInteger lFib( int n ) {
    if( n < 0 ) throw new IllegalArgumentException();
    if( n >= fibs.size() ) {
        // Grow the table iteratively; each new entry is the sum of the previous two.
        for( int i = fibs.size(); i <= n; i++ )
            fibs.add( fibs.get( i - 2 ).add( fibs.get( i - 1 ) ) );
    }
    return fibs.get( n );
}
Very lightly tested.
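With this version, a call like the following should print fib(8000) without any deep recursion (assuming lFib lives in a class with a main method):

System.out.println( "8000: " + lFib( 8000 ) );

The list doubles as the memo table, so repeated calls reuse earlier results, and each Fibonacci number is computed at most once across the lifetime of the program.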