Memory-constrained coin changing for numbers up to one billion

I faced this problem during a training contest. We are given N distinct values (N ≤ 100). Call this array A[N]; we are guaranteed that 1 appears in the array and that A[i] ≤ 10^9. We are also given a number S, where S ≤ 10^9.

Now we have to solve the classic coin-change problem with these values: find the minimum number of elements that sum to exactly S. Every element of A can be used an infinite number of times.

  • Time limit: 1 sec

  • Memory limit: 256 MB

Example:

S = 1000, N = 10, A[] = {1,12,123,4,5,678,7,8,9,10}. The result is 10:
1000 = 678 + 123 + 123 + 12 + 12 + 12 + 12 + 12 + 12 + 4

What I have tried

I tried to solve this with the classic dynamic-programming coin-change technique, but it uses too much memory and gets "memory limit exceeded".
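For reference, the classic 1-D DP looks like the sketch below (Python, purely for illustration). Its dp array needs one entry per sum up to S, so at S = 10^9 it would need gigabytes, far over the 256 MB limit (and its O(S*N) time is far over 1 second as well):

```python
# Classic bottom-up DP: dp[s] = minimum number of coins summing to exactly s.
# Memory is O(S) -- the reason this approach fails for S up to 10^9.
def min_coins_dp(S, coins):
    INF = float("inf")
    dp = [0] + [INF] * S          # one entry per sum: the memory bottleneck
    for s in range(1, S + 1):
        for c in coins:
            if c <= s and dp[s - c] + 1 < dp[s]:
                dp[s] = dp[s - c] + 1
    return dp[S]
```

For the small example above it works fine (`min_coins_dp(1000, [1,12,123,4,5,678,7,8,9,10])` returns 10), but it cannot scale to the given limits.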

I can't figure out what information we should keep about those values. Thanks in advance.

Here are a couple of test cases that cannot be solved with the classic DP coin-change approach.

S = 1000000000, N = 100
1 373241370 973754081 826685384 491500595 765099032 823328348 462385937
251930295 819055757 641895809 106173894 898709067 513260292 548326059
741996520 959257789 328409680 411542100 329874568 352458265 609729300
389721366 313699758 383922849 104342783 224127933 99215674 37629322
230018005 33875545 767937253 763298440 781853694 420819727 794366283
178777428 881069368 595934934 321543015 27436140 280556657 851680043
318369090 364177373 431592761 487380596 428235724 134037293 372264778
267891476 218390453 550035096 220099490 71718497 860530411 175542466
548997466 884701071 774620807 118472853 432325205 795739616 266609698
242622150 433332316 150791955 691702017 803277687 323953978 521256141
174108096 412366100 813501388 642963957 415051728 740653706 68239387
982329783 619220557 861659596 303476058 85512863 72420422 645130771
228736228 367259743 400311288 105258339 628254036 495010223 40223395
110232856 856929227 25543992 957121494 359385967 533951841 449476607
134830774
OUTPUT FOR THIS TEST CASE: 5

S = 999865497, N = 7
1 267062069 637323855 219276511 404376890 528753603 199747292
OUTPUT FOR THIS TEST CASE: 1129042

S = 1000000000, N = 40
1 12 123 4 5 678 7 8 9 10 400 25 23 1000 67 98 33 46 79 896 11 112 1223 412
532 6781 17 18 19 170 1400 925 723 11000 607 983 313 486 739 896
OUTPUT FOR THIS TEST CASE: 90910
asked Jul 12 '17 by someone12321


2 Answers

(NOTE: Updated and edited for clarity. Complexity Analysis added at the end.)

OK, here is my solution, including my fixes to the performance issues found by @PeterdeRivaz. I have tested this against all of the test cases provided in the question and the comments, and it finishes all of them in under a second (well, 1.5 s in one case), using essentially only the memory for the partial-results cache (about 16 MB, I'd guess).

Rather than using the traditional DP solution (which is both too slow and requires too much memory), I use a Depth-First, Greedy-First combinatorial search with pruning using current best results. I was surprised (very) that this works as well as it does, but I still suspect that you could construct test sets that would take a worst-case exponential amount of time.

First there is a master function that is the only thing that calling code needs to call. It handles all of the setup and initialization and calls everything else. (all code is C#)

// Find the min # of coins for a specified sum
int CountChange(int targetSum, int[] coins)
{
    // init the cache for (partial) memoization
    PrevResultCache = new PartialResult[1048576];

    // make sure the coins are sorted lowest to highest
    Array.Sort(coins);

    int curBest = targetSum;
    int result = CountChange_r(targetSum, coins, coins.GetLength(0) - 1, 0, ref curBest);

    return result;
}

Because of the problematic test cases raised by @PeterdeRivaz, I have also added a partial-results cache to handle cases where there are large numbers in N[] that are close together.

Here is the code for the cache:

// implement a very simple cache for previous results of remainder counts
struct PartialResult
{
    public int PartialSum;
    public int CoinVal;
    public int RemainingCount;
}
PartialResult[] PrevResultCache;

// checks the partial count cache for already calculated results
int PrevAddlCount(int currSum, int currCoinVal)
{
    int cacheAddr = currSum & 1048575;  // AND with (2^20-1) to get only the first 20 bits
    PartialResult prev = PrevResultCache[cacheAddr];

    // use it, as long as it's actually the same partial sum
    // and the coin value is at least as large as the current coin
    if ((prev.PartialSum == currSum) && (prev.CoinVal >= currCoinVal))
    {
        return prev.RemainingCount;
    }
    // otherwise flag as empty
    return 0;
}

// add or overwrite a new value to the cache
void AddPartialCount(int currSum, int currCoinVal, int remainingCount)
{
    int cacheAddr = currSum & 1048575;  // AND with (2^20-1) to get only the first 20 bits
    PartialResult prev = PrevResultCache[cacheAddr];

    // only add if the Sum is different or the result is better
    if ((prev.PartialSum != currSum)
        || (prev.CoinVal <= currCoinVal)
        || (prev.RemainingCount == 0)
        || (prev.RemainingCount >= remainingCount)
        )
    {
        prev.PartialSum = currSum;
        prev.CoinVal = currCoinVal;
        prev.RemainingCount = remainingCount;
        PrevResultCache[cacheAddr] = prev;
    }
}

And here is the code for the recursive function that does the actual counting:

/*
 * Find the minimum number of coins required totaling to a specific sum
 * using a list of coin denominations passed.
 *
 * Memory Requirements: O(N)  where N is the number of coin denominations
 *                            (primarily for the stack)
 *
 * CPU requirements:  O(Sqrt(S)*N) where S is the target Sum
 *                           (Average, estimated. This is very hard to figure out.)
 */
int CountChange_r(int targetSum, int[] coins, int coinIdx, int curCount, ref int curBest)
{
    int coinVal = coins[coinIdx];
    int newCount = 0;

    // check to see if we are at the end of the search tree (coinIdx=0, coinVal=1)
    // or we have reached the targetSum
    if ((coinVal == 1) || (targetSum == 0))
    {
        // just use math to get the final total for this path/combination
        newCount = curCount + targetSum;
        // update, if we have a new curBest
        if (newCount < curBest) curBest = newCount;
        return newCount;
    }

    // prune this whole branch if it cannot possibly improve on curBest
    int bestPossible = curCount + (targetSum / coinVal);
    if (bestPossible >= curBest)
        return bestPossible;  // NOTE: this is a false answer, but it shouldn't matter
                              //  because we should never use it.

    // check the cache to see if a remainder-count for this partial sum
    // already exists (and used coins at least as large as ours)
    int prevRemCount = PrevAddlCount(targetSum, coinVal);
    if (prevRemCount > 0)
    {
        // it exists, so use it
        newCount = curCount + prevRemCount;
        // update, if we have a new curBest
        if (newCount < curBest) curBest = newCount;
        return newCount;
    }

    // always try the largest remaining coin first, starting with the
    // maximum possible number of that coin (greedy-first searching)
    newCount = curCount + targetSum;
    for (int cnt = targetSum / coinVal; cnt >= 0; cnt--)
    {
        int tmpCount = CountChange_r(targetSum - (cnt * coinVal), coins, coinIdx - 1, curCount + cnt, ref curBest);

        if (tmpCount < newCount) newCount = tmpCount;
    }

    // Add our new partial result to the cache
    AddPartialCount(targetSum, coinVal, newCount - curCount);

    return newCount;
}
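For readers who want the shape of the search without the C# scaffolding, here is a minimal Python sketch of the same depth-first, greedy-first branch-and-bound. It omits the partial-results cache for brevity and assumes, as the problem statement guarantees, that 1 is among the coins:

```python
# Depth-first, greedy-first branch-and-bound (cache omitted).
# `best` holds the best complete solution found so far and drives the pruning.
def count_change(target, coins):
    coins = sorted(coins)
    best = [target]                  # all-ones solution: a valid upper bound

    def rec(remaining, idx, used):
        coin = coins[idx]
        # leaf: coin value 1 covers whatever remains, one coin per unit
        if coin == 1 or remaining == 0:
            best[0] = min(best[0], used + remaining)
            return
        # prune: even taking this coin as often as possible can't beat best
        if used + remaining // coin >= best[0]:
            return
        # greedy-first: try the maximum count of the largest coin first
        for cnt in range(remaining // coin, -1, -1):
            rec(remaining - cnt * coin, idx - 1, used + cnt)

    rec(target, len(coins) - 1, 0)
    return best[0]
```

On the question's small example, `count_change(1000, [1,12,123,4,5,678,7,8,9,10])` returns 10; without the cache, though, the adversarial cases with many close large coins remain slow.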

Analysis:

Memory:

Memory usage is pretty easy to determine for this algorithm. Basically there's only the partial-results cache and the stack. The cache is fixed at approximately 1 million entries times the size of each entry (3*4 bytes), so about 12 MB. The stack is limited to O(N), so together, memory is clearly not a problem.

CPU:

The run-time complexity of this algorithm starts out hard to determine and then gets harder, so please excuse the hand-waving here. I tried to find an analysis of just the brute-force problem (a combinatorial search over sums of multiples of the N base values totaling S) but not much turned up. What little there was tended to say it was O(N^S), which is clearly too high. I think a fairer estimate is O(N^(S/N)), or possibly O(N^(S/AVG(N))) or even O(N^(S/Gmean(N))), where Gmean(N) is the geometric mean of the elements of N[]. This solution starts out with the brute-force combinatorial search and then improves it with two significant optimizations.

The first is the pruning of branches based on estimates of the best possible result for a branch versus the best result already found. If the best-case estimators were perfectly accurate and the work for branches was perfectly distributed, this would mean that if we find a result that is better than 90% of the other possible cases, then pruning would effectively eliminate 90% of the work from that point on. To make a long story short, the amount of work still remaining after pruning should shrink harmonically as the search progresses. Assuming that some kind of summing/integration is applied to get a work total, this appears to me to work out to a logarithm of the original work. So let's call it O(Log(N^(S/N))), or O(N*Log(S/N)), which is pretty darn good. (Though O(N*Log(S/Gmean(N))) is probably more accurate.)

However, there are two obvious holes in this. First, the best-case estimators are not perfectly accurate, so they will not prune as effectively as assumed above; but this is somewhat counter-balanced by the greedy-first ordering of the branches, which gives the best chance of finding better solutions early in the search, which in turn increases the effectiveness of pruning.

The second problem is that the best-case estimator works better when the different values of N are far apart. Specifically, if |(S/n2 - S/n1)| > 1 for any two values in N, then it becomes almost perfectly effective. For values of N less than SQRT(S), even two adjacent values (k, k+1) are far enough apart that this rule applies. However, for increasing values above SQRT(S), a window opens up in which any number of N-values will not be able to effectively prune each other. The size of this window is approximately K/SQRT(S). So if S = 10^9 and K is around 10^6, this window will be almost 30 numbers wide. This means that N[] could contain 1 plus every number from 1000001 to 1000029, and the pruning optimization would provide almost no benefit.

To address this, I added the partial-results cache, which allows memoization of the most recent partial sums up to the target S. This takes advantage of the fact that when the N-values are close together, their sums will tend to have an extremely high number of duplicates. As best as I can figure, the effectiveness is approximately N times the J-th root of the problem size, where J = S/K and K is some measure of the average size of the N-values (Gmean(N) is probably the best estimate). If we apply this to the brute-force combinatorial search, assuming that pruning is ineffective, we get O((N^(S/Gmean(N)))^(1/Gmean(N))), which I think is also O(N^(S/(Gmean(N)^2))).

So, at this point, take your pick. I know this is really sketchy, and even if it is correct it is still very sensitive to the distribution of the N-values, so expect a lot of variance.

answered Nov 02 '22 by RBarryYoung


[I've replaced the previous idea about bit operations because it seems to be too time consuming]

A bit of a crazy idea, and incomplete, but it may work.

Let's start by introducing f(n,s), which returns the number of combinations in which s can be composed from n coins.

Now, how is f(n+1,s) related to f(n,s)?

One of possible ways to calculate it is:

f(n+1,s)=sum[coin:coins]f(n,s-coin)

For example, if we have coins 1 and 3,

f(0,)=[1,0,0,0,0,0,0,0] - with zero coins we can have only zero sum

f(1,)=[0,1,0,1,0,0,0,0] - what we can have with one coin

f(2,)=[0,0,1,0,2,0,1,0] - what we can have with two coins
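As a sanity check, the recurrence can be sketched directly (Python, direct convolution; illustrative only) and reproduces the {1, 3} rows above:

```python
# f(n+1, s) = sum over i of f(n, s - i) * a(i), i.e. a convolution of
# f(n, .) with the coin-indicator vector a, truncated to `limit` sums.
def convolve(f, a, limit):
    out = [0] * limit
    for s in range(limit):
        for i, ai in enumerate(a):
            if ai and 0 <= s - i < len(f):
                out[s] += ai * f[s - i]
    return out

coins = {1, 3}
limit = 8
a = [1 if i in coins else 0 for i in range(limit)]  # a(i) = 1 iff coin i exists
f0 = [1] + [0] * (limit - 1)   # zero coins -> only the zero sum
f1 = convolve(f0, a, limit)    # [0,1,0,1,0,0,0,0]
f2 = convolve(f1, a, limit)    # [0,0,1,0,2,0,1,0]
```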

We can rewrite it a bit differently:

f(n+1,s)=sum[i=0..max]f(n,s-i)*a(i)

a(i)=1 if we have coin i and 0 otherwise

What we have here is convolution: f(n+1,)=conv(f(n,),a)

https://en.wikipedia.org/wiki/Convolution

Computing it directly from the definition takes O(n^2).

But we can use the Fourier transform to reduce it to O(n*log n).

https://en.wikipedia.org/wiki/Convolution#Convolution_theorem

So now we have a more-or-less cheap way to find out which sums are possible with n coins without going incrementally: just calculate the n-th power of F(a) and apply the inverse Fourier transform.
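A sketch of that idea using NumPy's FFT (assumed available here; the array size must exceed n times the largest coin so the circular convolution matches the linear one):

```python
import numpy as np

# Counts of sums reachable with exactly n coins, computed as the n-th power
# of the coin-indicator spectrum: one FFT, one power, one inverse FFT.
def sums_with_n_coins(coins, n, size):
    a = np.zeros(size)
    a[list(coins)] = 1              # a(i) = 1 iff coin i exists
    A = np.fft.rfft(a)
    f = np.fft.irfft(A ** n, size)  # f[s] = number of n-coin combinations for s
    return np.rint(f).astype(int)   # round away floating-point FFT noise

f2 = sums_with_n_coins({1, 3}, 2, 16)  # matches the f(2,) row above
```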

This allows us to do a kind of binary search on n, which can help in handling cases where the answer is big.

As I said, the idea is incomplete: for now I have no idea how to combine a bit representation with Fourier transforms (to satisfy the memory constraint), or whether we will fit into 1 second on any "regular" CPU...

answered Nov 02 '22 by maxim1000