Many algorithm problems can be solved with both DFS and dynamic programming. Is there any direct or indirect connection between these two techniques? And if I have already worked out the DP subproblem, how can I convert it into the recursive function for DFS?
DFS + memoization = DP

As a definition, this applies to many problems.
By definition, a dynamic programming problem must have "optimal substructure": you can build the solution to the whole problem from solutions to its subproblems.
In other words, you express f(n) in terms of f(n-1) and other smaller subproblems. This recursive expression can be coded directly as a depth-first search.
To take advantage of already-calculated sub-solutions, you cache them using memoization. That's all DP is.
P.S.: Of course, you can fill the cache with an iterative (bottom-up) loop instead of the DFS + memoization approach, but for the purpose of your question that would only make it harder to understand (a small sketch of that variant follows).
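For contrast, here is a minimal sketch of that iterative (bottom-up) variant, using the Fibonacci recurrence f(n) = f(n-1) + f(n-2) as the concrete example; the class and method names are only illustrative:

public class FibBottomUp {
    //bottom-up DP: fill the cache in a loop, smallest subproblems first
    static long fib(int n) {
        long[] cache = new long[Math.max(n + 1, 2)];
        cache[0] = 1;                   //base cases: fib(0) = fib(1) = 1
        cache[1] = 1;
        for (int i = 2; i <= n; i++) {
            //each value is read from the cache, never recomputed
            cache[i] = cache[i - 1] + cache[i - 2];
        }
        return cache[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(10));    //prints 89
    }
}

Both versions compute the same table; the loop just makes the evaluation order explicit instead of letting the recursion discover it.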
Dynamic programming is a way to increase an algorithm's efficiency by storing intermediate results in memory, in other words, memoization. It can be combined with many kinds of algorithms, and it is especially useful for brute-force style algorithms such as DFS.
A simple example is Fibonacci. I assume you already know how to solve Fibonacci with plain recursion (DFS).
With DP, you never need to calculate the same value twice, because you store each computed value (usually in an array).
Example (Java):

public class Fib {
    //each array element defaults to 0, meaning "not calculated yet"
    static long[] fibArr = new long[50];    //large enough for this example

    static long fib(int number) {
        if (number <= 1)
            return 1;                       //base cases: fib(0) = fib(1) = 1
        //return the Fibonacci number if it has been calculated beforehand
        if (fibArr[number] != 0)
            return fibArr[number];
        fibArr[number] = fib(number - 1) + fib(number - 2);
        return fibArr[number];
    }

    public static void main(String[] args) {
        System.out.println(fib(10));        //prints 89
    }
}
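With the memo array each fib(i) is computed only once, so the recursion does O(n) work instead of the exponential number of calls made by the plain recursive version.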