Balancing the stones

You have a number of stones with known weights w1, …, wn. Write a program that rearranges the stones into two piles such that the weight difference between the two piles is minimal. I have a DP algorithm:

#include <iostream>
using namespace std;

int max(int a, int b){
    return a >= b ? a : b;
}

int diff(int* weights, int number_elements, int capacity){
    int **ptrarray = new int* [capacity + 1]; 

    for (int count = 0; count <= capacity; count++) {
        ptrarray[count] = new int [number_elements + 1];
    }

    for (int i = 0; i <= number_elements; ++i){
        ptrarray[0][i] = 0;
    }

    for (int i = 0; i <= capacity; ++i){
        ptrarray[i][0] = 0;
    }

    for (int i = 1; i <= number_elements; ++i){
        for (int j = 1; j <= capacity; ++j){
            if(weights[i - 1] <= j){
                ptrarray[j][i] = max(ptrarray[j][i - 1], ptrarray[j - weights[i - 1]][i - 1] + weights[i - 1]);
            } else{
                ptrarray[j][i] = ptrarray[j][i - 1];
            }
        }
    }

    int result = ptrarray[capacity][number_elements];

    for (int count = 0; count <= capacity; count++) {
        delete[] ptrarray[count];
    }
    delete[] ptrarray;

    return result;
}




int main(){ 
    int number_elements;

    cin >> number_elements;

    int* weights = new int[number_elements];
    int sum = 0;
    int first_half;

    for (int i = 0; i < number_elements; ++i){
        cin >> weights[i];
        sum+=weights[i];
    }

    first_half = sum / 2;
    int after;

    after = diff(weights, number_elements, first_half);
    cout << sum - 2 * after << endl;

    delete[] weights;
    return 0;
}

But it's a little bit naive. It demands too much memory, and I need some hints to simplify it. Is there a more efficient approach?

Evgeniy asked Oct 30 '22 11:10
1 Answer

You can reduce the memory usage by making the following observations:

  1. Your code only ever uses two adjacent layers (i and i - 1) of the ptrarray array at any time.

  2. If you iterate from the largest to the smallest index in each layer, you can overwrite the previous layer in place. This way you'll need only one array.

Here is a pseudo code with this optimization:

max_weight = new int[max_capacity + 1](0)
for weight in weights:
     for capacity in [max_capacity ... weight]:
          max_weight[capacity] = max(max_weight[capacity], max_weight[capacity - weight] + weight)

It requires O(max_capacity) memory (instead of O(max_capacity * number_of_items)).

A couple more optimizations: you can use a boolean array (to indicate whether the sum i is reachable) and pick the largest reachable sum at the end, instead of storing the largest sum less than or equal to i. Moreover, you can use an std::bitset instead of a boolean array to get an O(max_capacity * num_items / word_len) time complexity (where word_len is the number of bits in the largest integer type the machine can perform logical operations on). Adding one weight would look like reachable |= (reachable << weight).

So the final version looks like this:

reachable = bitset(max_capacity + 1)
reachable[0] = true
for weight in weights:
    reachable |= reachable << weight
return highest set bit of reachable

The code becomes much simpler and more efficient this way (the time complexity is technically the same, but it's much faster in practice).

There's one caveat here: you need to know the size of std::bitset at compile time, so if it's not possible, you'll need a different bitset implementation.

kraskevich answered Nov 11 '22 16:11