An algorithmic problem: find the minimum number of operations to reduce each number in an array to 0

Tags:

c++

algorithm

Recently I came across an algorithmic problem which is described below:

Given an array of positive integers, in one operation you may pick any consecutive interval of the array and subtract the same value from every number in that interval. What is the minimum number of such operations needed to make all the numbers in the array 0?

I'm trying to use a greedy algorithm, but I'm running into a local-minimum problem, so I suspect I need dynamic programming; however, I can't find the recurrence. Am I thinking about this correctly? If not, what is the right approach? Can anyone give me a hint on this problem?

I have thought hard about this problem but just can't come up with a result, and it's frustrating me.

EDIT:

Here is my solution: a greedy algorithm. Each time, find the lowest number in the current interval, subtract it from every number in the interval, and recurse on the sub-intervals between the resulting zeros until all numbers are zero. The C++ implementation is as follows:

#include <iostream>
#include <vector>

using namespace std;

// Greedily zero out v[l..r]: subtract the interval minimum from every
// element, then recurse on each maximal sub-interval between the zeros.
// Returns the number of operations used.
int foo(vector<int> &v, int l, int r) {
    // Find the minimum of v[l..r].
    int mn = v[l];
    vector<int> idx;                        // indices that become zero
    for (int i = l; i <= r; i++) {
        mn = min(mn, v[i]);
    }
    // Subtract it from the whole interval and record the new zeros.
    for (int i = l; i <= r; i++) {
        v[i] -= mn;
        if (v[i] == 0) {
            idx.push_back(i);
        }
    }
    cout << "l: " << l << " r: " << r << " delete: " << mn << endl;
    if ((int)idx.size() == r - l + 1) {     // the whole interval is now zero
        return 1;
    }
    int res = 1;                            // one operation for this subtraction
    int tmp_l = l;
    // Recurse on each maximal sub-interval between consecutive zeros.
    for (int x : idx) {
        if (tmp_l < x) {
            res += foo(v, tmp_l, x - 1);
        }
        tmp_l = x + 1;
    }
    if (tmp_l <= r) {
        res += foo(v, tmp_l, r);
    }
    return res;
}

int main() {
    int n;
    cin >> n;
    vector<int> v;
    int tmp;
    for (int i = 0; i < n; i++) {
        cin >> tmp;
        v.push_back(tmp);
    }
    cout << foo(v, 0, v.size() - 1);
}

The test case format is as follows (the first line gives the size of the array, the second line the numbers in the array):

5
3 3 2 3 3

My solution hits a local minimum on the following test case:

10
2 3 4 5 1 2 3 1 2 3

My solution gives the answer 9, but the correct answer is 8. The optimal process is:

origin:
    2 3 4 5 1 2 3 1 2 3

1 - 0 1 2 3 1 2 3 1 2 3
2 - 0 0 1 2 0 1 2 0 1 2
3 - 0 0 0 1 0 1 2 0 1 2
4 - 0 0 0 0 0 1 2 0 1 2
5 - 0 0 0 0 0 0 1 0 1 2
6 - 0 0 0 0 0 0 0 0 1 2
7 - 0 0 0 0 0 0 0 0 0 1
8 - 0 0 0 0 0 0 0 0 0 0

It seems that always subtracting the lowest number in the interval is not the best strategy: in this test case, the optimal first step subtracts 2 from a prefix of the array, even though the minimum of the whole interval is 1.
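
For arrays this small, a brute-force breadth-first search over array states can confirm the true optimum. The following checker is a minimal sketch of that idea (not part of my greedy solution), assuming every intermediate value must stay non-negative; bruteForce is just an illustrative name.

#include <algorithm>
#include <iostream>
#include <map>
#include <queue>
#include <vector>

// Brute-force checker: BFS over array states, trying every interval
// [l, r] and every legal amount (values must stay non-negative). Each
// edge is one operation, so the first time the all-zero state is
// reached we have the exact minimum number of operations.
int bruteForce(const std::vector<int> &start) {
    std::map<std::vector<int>, int> dist;
    std::queue<std::vector<int>> q;
    dist[start] = 0;
    q.push(start);
    while (!q.empty()) {
        std::vector<int> v = q.front();
        q.pop();
        int d = dist[v];
        if (std::all_of(v.begin(), v.end(), [](int x) { return x == 0; }))
            return d;
        int n = (int)v.size();
        for (int l = 0; l < n; l++) {
            int mn = v[l];
            for (int r = l; r < n && mn > 0; r++) {
                mn = std::min(mn, v[r]);   // largest legal amount on [l, r]
                for (int amt = 1; amt <= mn; amt++) {
                    std::vector<int> w = v;
                    for (int i = l; i <= r; i++) w[i] -= amt;
                    if (dist.emplace(w, d + 1).second) q.push(w);
                }
            }
        }
    }
    return -1; // never reached: the all-zero state is always reachable
}

int main() {
    // Prints 8 for the test case above, confirming that the greedy
    // answer 9 is not optimal.
    std::cout << bruteForce({2, 3, 4, 5, 1, 2, 3, 1, 2, 3}) << "\n";
}

The state space is exponential, so this is only practical for very small arrays, but it is handy for spotting cases where the greedy strategy fails.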


1 Answer

This isn't an algorithm, but I do have a proof that it is NP-complete by a reduction from subset sum.

Let S = [s1, s2, ..., sn] be an array of n positive integers, and k be another positive integer. Consider the following numbers.

[s1, s1+s2, s1+s2+s3, ..., s1+s2+...+sn, k]

This is solvable in n steps if and only if k is the sum of some subset of the numbers from S.
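
To make the construction concrete, here is a minimal sketch (buildInstance is just an illustrative name) that builds the instance from S and k:

#include <iostream>
#include <vector>

// Build the reduction instance described above: the prefix sums of S,
// followed by the target k. The claim is that this array can be zeroed
// out in n operations iff some subset of S sums to k.
std::vector<long long> buildInstance(const std::vector<long long> &S, long long k) {
    std::vector<long long> a;
    long long prefix = 0;
    for (long long s : S) {
        prefix += s;
        a.push_back(prefix);   // a[i-1] = s1 + s2 + ... + si
    }
    a.push_back(k);            // the final element is the target k
    return a;
}

int main() {
    // Example: S = {3, 5, 7}, k = 8. Since 8 = 3 + 5, the instance
    // [3, 8, 15, 8] should be solvable in n = 3 operations.
    for (long long x : buildInstance({3, 5, 7}, 8)) std::cout << x << ' ';
    std::cout << '\n';
}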

I actually do know a way to solve this with A* search. But there is a combinatorial explosion. Here is an outline of the idea.

Given an optimal answer, we can rearrange the order of the operations so that we're always subtracting from an interval starting at the first non-zero term. As for the heuristic: let m be the number of places where the value in the list changes. A single subtraction can reduce m by at most 2 (it can potentially eliminate the edge at each end of its interval), so the ceiling of m/2 is a lower bound on the number of steps left.
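
In code, the heuristic could look like the following sketch (under my reading that m counts value changes against an implicit 0 on each side of the array):

#include <vector>

// Admissible A* heuristic: m counts the adjacent positions where the
// value changes, treating the array as padded with 0 on both sides.
// One subtraction can erase at most two such edges (the two boundaries
// of its interval), so at least ceil(m / 2) operations remain.
int heuristic(const std::vector<int> &v) {
    int m = 0;
    int prev = 0;          // implicit 0 before the array
    for (int x : v) {
        if (x != prev) m++;
        prev = x;
    }
    if (prev != 0) m++;    // implicit 0 after the array
    return (m + 1) / 2;    // ceil(m / 2)
}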


I was asked for evidence of the reduction.

The "can be done if k is a sum of a subset of S" part is simple. The blocks removed are of heights s1, s2, ..., sn. The ith block goes from index i-1 to either index n-1 or n depending on whether i is one of the terms in the subset sum.

The "only if" is more complicated.

First, to make indexing easy, let's replace S with an array. So instead of saying s1 we'll say S[0]. (Note: there will be lots of off-by-one indexing issues.) Our example array with n+1 elements is therefore

a = [a[0], a[1], ..., a[n]]
  = [S[0], S[0] + S[1], S[0] + S[1] + S[2], ..., S[0] + ... + S[n-1], k]

Now suppose we can zero out the whole array with n blocks. Using the fact that addition is commutative, let's rearrange the blocks by the first index removed. This guarantees that we zero out the numbers in the array from left to right. Let's let taken[i][j] be the amount taken from a[j] after the first i blocks are taken.

Note that taken[0][j] is always zero, because we have not yet subtracted out anything.

Suppose that a[j] is not yet zeroed out after i blocks. Every block so far must start at or before j, and only some of them may continue on to j+1, so taken[i][j] >= taken[i][j+1]. Since a[0] < a[1] < ... < a[n-1], this means each block can zero out at most one of those entries. Since we manage to zero out all n of them with n blocks, we must zero out exactly one of those indexes per block. (And, at some point, we'll manage to zero out a[n], which is k.)

Now our goal is to see that taken[i][j] must always be the sum of a subset of [S[0], S[1], ..., S[i-1]]. This will mean that taken[n][n] is. And since that zeroed out k, that means that k is the sum of a subset of S.

We will do this by showing something stronger.

If i <= j < l, then taken[i][j] is the sum of a subset of [S[0], S[1], ..., S[i-1]], and taken[i][l] is the sum of a subset of the terms making up taken[i][j].

First, the trivial case. taken[0][j] is always 0 because when we have taken 0 blocks away, we haven't taken anything away. 0 is, of course, the sum of a subset of the empty set. And the empty set is a subset of the empty set.

We'll do i=1 by hand. We need to zero out a[0] = S[0]. Therefore the first block has height S[0] and some length, meaning that taken[1][j] has to be either S[0] or 0, depending on whether you're past the end of the block. And once it turns 0 it stays 0, so if j < l then taken[1][l] is the sum of a subset of the terms making up taken[1][j].

And now for the general induction case. The ith block needs to finish zeroing out a[i-1] = S[0] + S[1] + ... + S[i-1]. We have already taken away taken[i-1][i-1], which is the sum of a subset of [S[0], S[1], ..., S[i-2]]. Therefore the ith block's height is the sum of the complement of that subset within [S[0], S[1], ..., S[i-1]].

If i <= j, then taken[i-1][j] is the sum of a subset of the terms making up taken[i-1][i-1]. Therefore the elements of S in the ith block are NOT already part of taken[i-1][j]. So whether or not the ith block is included in taken[i][j], you get the sum of a subset of [S[0], S[1], ..., S[i-1]].

Now suppose that j < l. We already had that taken[i-1][l] is the sum of a subset of the terms that made up taken[i-1][j]. The ith block may or may not be included in taken[i][j] and taken[i][l]. But if it is included in taken[i][l] then it MUST be included in taken[i][j] (the block is contiguous and starts at index i-1, so if it reaches l it also covers the earlier index j). That leaves three cases: the block is in neither, in taken[i][j] only, or in both. In all three, taken[i][l] is the sum of a subset of the terms that make up taken[i][j].

So by induction we have the result. And as already noted, that means that k = taken[n][n] is the sum of a subset of [S[0], S[1], ..., S[n-1]]. So solving this problem for the array a leads to a solution of the subset sum problem.
