I'm going through some algorithms, and came across the coin change problem.
When thinking about the problem I came up with this naive recursive solution:
int coinChange(const vector<int>& coins, int start, int n) {
    if (n == 0) return 1;  // made exact change: one valid combination
    if (n < 0) return 0;   // overshot the target: dead end
    int total = 0;
    // Try each coin from `start` onward. Recursing with `i` (not i + 1)
    // allows the same coin to be reused, while coins before `start` are
    // never revisited, so each combination is counted once.
    for (int i = start; i < (int)coins.size(); ++i) {
        if (coins[i] <= n) total += coinChange(coins, i, n - coins[i]);
    }
    return total;
}
I then realized the "accepted" solution was as follows:
int count( int S[], int m, int n )
{
    // If n is 0, there is exactly 1 solution (do not include any coin)
    if (n == 0)
        return 1;
    // If n is less than 0, no solution exists
    if (n < 0)
        return 0;
    // If there are no coins left and n is still positive, no solution exists
    if (m <= 0 && n >= 1)
        return 0;
    // count is the sum of solutions (i) excluding S[m-1] and (ii) including S[m-1]
    return count( S, m - 1, n ) + count( S, m, n - S[m-1] );
}
At first I thought the two were essentially the same. It was clear that my recursion tree was much wider, but that seemed to be offset by my algorithm doing more work at each level. Both algorithms appear to consider the number of ways to make change with the current coin (given it is <= the current sum), and the number of ways to make change without it (i.e. with the rest of the coin array). Therefore the parameter start in my algorithm plays essentially the same role as m does in the second algorithm.
The more I look at it, though, it seems that regardless of the above, my algorithm is O(n^n) while the second one is O(2^n). I've been looking at this for too long, but if someone could explain what extra work my algorithm is doing compared to the second one, that would be great.
I understand the dynamic programming solution to this problem, this question is purely a complexity based question.
The time complexity of the coin change problem (solved with dynamic programming) is O(n*c) for target n and c coins, and the space complexity is O(n*c), which can be reduced to O(n).
The time complexity of a recursive algorithm depends on two factors: (1) the total number of recursive calls, and (2) the cost of the additional work done in each call. A recursion tree is a diagram representing that per-call cost in terms of the input size.
Recursion can also increase time complexity. An example of this is calculating Fibonacci numbers: if you compute the sequence up to a number n using naive recursion rather than iteration, the time to complete the task is much greater than with the iterative approach (exponential versus linear).
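A minimal sketch of that contrast (my own illustration, not from the original post):

#include <cstdint>

// Naive recursion: T(n) = T(n-1) + T(n-2) + O(1), which grows exponentially
// because the same subproblems are recomputed over and over.
std::uint64_t fibRec(int n) {
    if (n < 2) return n;
    return fibRec(n - 1) + fibRec(n - 2);
}

// Iteration: a single O(n) loop with O(1) extra space.
std::uint64_t fibIter(int n) {
    std::uint64_t a = 0, b = 1;
    for (int i = 0; i < n; ++i) {
        std::uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}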
The time complexity of the greedy coin change algorithm is O(n log n) for sorting the n coins, plus O(total) for the while loop in the worst case.
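For reference, here is a sketch of that greedy algorithm (my own, with an assumed signature; note that greedy only yields optimal change for canonical coin systems such as {1, 5, 10, 25}):

#include <algorithm>
#include <functional>
#include <vector>

// Greedy change-making: sort descending, then repeatedly take the largest
// coin that still fits. Returns the number of coins used, or -1 if the
// amount cannot be reached.
int greedyChange(std::vector<int> coins, int total) {
    std::sort(coins.begin(), coins.end(), std::greater<int>());  // O(n log n)
    int used = 0;
    std::size_t i = 0;
    while (total > 0 && i < coins.size()) {  // worst case O(total) iterations
        if (coins[i] <= total) {
            total -= coins[i];
            ++used;
        } else {
            ++i;
        }
    }
    return total == 0 ? used : -1;
}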
The two pieces of code are the same except that the second uses recursion instead of a for loop to iterate over the coins. That makes their runtime complexity the same (although the second piece of code probably has worse memory complexity because of the extra recursive calls, but that may get lost in the wash).
For example, here's a partial evaluation of the second count in the case where S = [1, 5, 10] and m = 3. On each line, I expand the left-most call to count.
count(S, 3, 100)
= count(S, 2, 100) + count(S, 3, 90)
= count(S, 1, 100) + count(S, 2, 95) + count(S, 3, 90)
= count(S, 0, 100) + count(S, 1, 99) + count(S, 2, 95) + count(S, 3, 90)
= 0 + count(S, 1, 99) + count(S, 2, 95) + count(S, 3, 90)
You can see that this is the same calculation as your for loop that sums up total.
Both algorithms are terrible because they run in exponential time. Here is an answer (of mine) that uses a neat dynamic programming method running in O(nm) time and O(n) memory, and it is extremely concise -- comparable in size to your naive recursive solution: https://stackoverflow.com/a/20743780/1400793 . It's in Python, but it's trivially convertible to C++.
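For instance, a C++ translation of that idea might look like this (my own sketch, not the linked answer verbatim): a 1-D table where ways[x] counts the combinations summing to x, filled coin by coin so each combination is counted exactly once.

#include <vector>

// Bottom-up DP: for m coins and target n this runs in O(n*m) time and
// O(n) memory. Keeping the coin loop on the outside counts combinations
// rather than permutations.
long long countWays(const std::vector<int>& coins, int n) {
    std::vector<long long> ways(n + 1, 0);
    ways[0] = 1;  // one way to make 0: use no coins
    for (int c : coins)
        for (int x = c; x <= n; ++x)
            ways[x] += ways[x - c];
    return ways[n];
}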
It sounds like you didn't read the whole article. The idea behind dynamic programming is that you store values you have already computed so that you don't need to calculate them again. At the end of the article you can see the actual correct solution.
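As a minimal sketch of that idea applied to count (my own illustration, assuming the same signature plus a cache):

#include <vector>

// Top-down memoization: the same recursion as count(), but each (m, n)
// pair is computed at most once and then read from the cache.
int countMemo(const std::vector<int>& S, int m, int n,
              std::vector<std::vector<int>>& memo) {
    if (n == 0) return 1;
    if (n < 0) return 0;
    if (m <= 0) return 0;
    if (memo[m][n] != -1) return memo[m][n];  // already computed
    return memo[m][n] = countMemo(S, m - 1, n, memo)
                      + countMemo(S, m, n - S[m - 1], memo);
}

// Usage: initialize the cache with -1, e.g.
// std::vector<std::vector<int>> memo(m + 1, std::vector<int>(n + 1, -1));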
As for why your solution is n^n and their original one is 2^n: both solutions are actually 2^(n+#coins). The second one just calls the function with m - 1 instead of using a loop that goes through every coin. While your solution tries every coin at the start and then fewer and fewer, theirs takes one coin of type m, then another, then another, until at some point it switches to type m - 1 and does the same with it, and so on. Basically, both solutions are the same.
Another way to prove that they have the same complexity is like this:
Both solutions are correct, so they reach all the same partial solutions, and both stop growing a branch of the recursion the moment n goes negative. Since they explore the same tree of partial solutions, they make the same number of recursive calls up to constant factors, and therefore they have the same complexity.
And if you are not convinced, just try each solution with a counter that you increment every time you enter the function. Do this for both and you will see that you get essentially the same number.
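A simple harness for that experiment (my own illustration) instruments your version; instrument count the same way and compare the two totals:

#include <iostream>
#include <vector>

long long calls = 0;  // bumped on every entry to the function under test

int coinChange(const std::vector<int>& coins, int start, int n) {
    ++calls;
    if (n == 0) return 1;
    if (n < 0) return 0;
    int total = 0;
    for (int i = start; i < (int)coins.size(); ++i)
        if (coins[i] <= n) total += coinChange(coins, i, n - coins[i]);
    return total;
}

int main() {
    std::vector<int> coins{1, 5, 10};
    coinChange(coins, 0, 30);
    std::cout << "calls: " << calls << '\n';
}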