Optimizing cartesian requests with affine costs

I have a cost optimization problem that I don't know if there is literature on. It is a bit hard to explain, so I apologize in advance for the length of the question.

There is a server I am accessing that works this way:

  • a request is made on records (r1, ..., rn) and fields (f1, ..., fp)
  • you can only request the Cartesian product (r1, ..., rn) x (f1, ..., fp)
  • the cost (time and money) associated with such a request is affine in the size of the request:

T((r1, ..., rn) x (f1, ..., fp)) = a + b * n * p

Without loss of generality (just by normalizing), we can assume that b=1 so the cost is:

T((r1, ..., rn) x (f1, ..., fp)) = a + n * p
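In code, the normalized cost of a single request is then simply (a trivial sketch; the function name is mine):

def request_cost(n_records, n_fields, a):
    # affine cost of one Cartesian request, with the slope b normalized to 1
    return a + n_records * n_fields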

  • I only need to request a subset of the pairs, (r1, f(r1)), ..., (rk, f(rk)); these requests come from the users. My program acts as a middleman between the users and the (external) server, and I receive a lot of such requests (tens of thousands a day).

Graphically, we can think of it as an n x p sparse matrix, whose nonzero values I want to cover with rectangular submatrices:

   r1 r2 r3 ... rn
   ------      ___
f1 |x  x|      |x|
f2 |x   |      ---
   ------
f3
..    ______
fp    |x  x|
      ------

with the following constraints:

  • the number of submatrices must be kept reasonable, because of the constant cost
  • every 'x' must lie within some submatrix
  • the total area covered must not be too large, because of the linear cost

I will name g the sparseness coefficient of my problem (the number of needed pairs over the total number of possible pairs, g = k / (n * p)). I know the coefficient a.

There are some obvious observations:

  • if a is small, the best solution is to request each (record, field) pair independently, and the total cost is: k * (a + 1) = g * n * p * (a + 1)
  • if a is large, the best solution is to request the whole Cartesian product, and the total cost is: a + n * p
  • the second solution is better as soon as g > g_min = (1 + a / (n * p)) / (a + 1) (see the numeric sketch after this list)
  • of course the order within each Cartesian product is unimportant, so I can permute the rows and the columns of my matrix to make it more easily coverable, for example:
   f1 f2 f3
r1  x    x
r2     x 
r3  x    x

can be reordered as

   f1 f3 f2
r1  x  x
r3  x  x
r2       x

And there is an optimal solution, which is to request (f1, f3) x (r1, r3) + (f2) x (r2)

  • Trying all the solutions and looking for the lowest cost is not an option, because the combinatorics explode:
for each permutation on rows: (n!)
   for each permutation on columns: (p!)
       for each possible covering of the n x p matrix: (time unknown, but large...)
           compute cost of the covering
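To make the break-even point concrete, here is a quick numeric sketch of the two baseline strategies (function and variable names are mine):

def cost_per_pair(k, a):
    # request each of the k needed pairs independently
    return k * (a + 1)

def cost_whole_matrix(n, p, a):
    # request the full n x p Cartesian product in one go
    return a + n * p

def g_min(n, p, a):
    # density above which the whole-matrix request becomes cheaper
    return (1 + a / (n * p)) / (a + 1)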

So I am looking for an approximate solution. I already have a kind of greedy algorithm that finds a covering given a matrix: it begins with unitary cells, then merges them if the proportion of empty cells in the merged block is below some threshold.
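A minimal sketch of that kind of greedy merging, assuming blocks are stored as (row set, column set) pairs; the representation, merge order, and threshold are assumptions of mine, not necessarily the asker's implementation:

def greedy_cover(pairs, empty_threshold=0.5):
    # pairs: set of requested (record, field) pairs
    # returns a list of (rows, cols) blocks covering every pair
    blocks = [({r}, {f}) for r, f in pairs]

    def empty_ratio(rows, cols):
        hits = sum((r, f) in pairs for r in rows for f in cols)
        return 1 - hits / (len(rows) * len(cols))

    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                rows = blocks[i][0] | blocks[j][0]
                cols = blocks[i][1] | blocks[j][1]
                # merge only if the merged block stays dense enough
                if empty_ratio(rows, cols) < empty_threshold:
                    blocks[i] = (rows, cols)  # keep the merged block
                    del blocks[j]             # drop the absorbed one
                    merged = True
                    break
            if merged:
                break
    return blocks

The total cost of a covering is then sum(a + len(rows) * len(cols) for rows, cols in blocks).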

To put some numbers in mind: my n is somewhere between 1 and 1000, and my p somewhere between 1 and 200. The coverage pattern is really 'blocky', because the records come in classes for which the fields asked for are similar. Unfortunately, I can't access the class of a record...

Question 1: Does anyone have an idea, a clever simplification, or a reference to a paper that could be useful? Since I have a lot of requests, I am looking for an algorithm that works well on average (but I can't afford for it to work very poorly in some extreme case, for example requesting the whole matrix when n and p are large and the request is actually quite sparse).

Question 2: In fact, the problem is even more complicated: the cost is actually of the form a + n * (p^b) + c * n' * p', where b is a constant < 1 (once a record is asked for one field, it is not too costly to ask for its other fields) and n' * p' = n * p * (1 - g) is the number of cells I don't want to request (because they are invalid, and there is an additional cost in requesting invalid things). I can't even dream of a fast solution to this problem, but still... any ideas?
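For completeness, the generalized per-request cost would then look something like this (a sketch; the parameter names are mine, and n_unwanted stands for the n' * p' term):

def generalized_cost(n, p, n_unwanted, a, b, c):
    # a: fixed cost per request
    # b < 1: asking a record for extra fields is comparatively cheap
    # c: penalty per requested-but-unwanted (invalid) cell
    return a + n * (p ** b) + c * n_unwanted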

asked Sep 10 '09 by LeMiz


1 Answer

Selecting the submatrices to cover the requested values is a form of the set cover problem and hence NP-complete. On top of that already hard problem, yours adds the twist that the costs of the sets differ (a weighted set cover).

That you are allowed to permute the rows and columns is not such a big problem, because you can simply consider disconnected submatrices: row one, columns four to seven, together with row five, columns four to seven, form a valid set, because you can just swap row two and row five and obtain the connected submatrix spanning row one, column four to row two, column seven. Of course this adds some constraints - not all sets are valid under all permutations - but I don't think this is the biggest problem.

The Wikipedia article gives the inapproximability result that the problem cannot be solved in polynomial time to better than a factor of 0.5 * log2(n), where n is the number of sets. In your case, 2^(n * p) is a (quite pessimistic) upper bound on the number of sets, which yields that in polynomial time you can only find a solution up to a factor of 0.5 * n * p (unless P = NP, and ignoring the varying costs).

An optimistic lower bound on the number of sets, ignoring permutations of rows and columns, is 0.5 * n^2 * p^2, yielding a much better factor of log2(n) + log2(p) - 0.5. In consequence, in your worst case of n = 1000 and p = 200, you can only expect to find a solution up to a factor of about 17 in the optimistic case, and up to a factor of about 100,000 in the pessimistic case (still ignoring the varying costs).
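The arithmetic behind those two factors, for anyone who wants to verify it:

from math import log2

n, p = 1000, 200
pessimistic = 0.5 * n * p             # 0.5 * log2(2^(n * p)) -> 100,000
optimistic = log2(n) + log2(p) - 0.5  # 0.5 * log2(0.5 * n^2 * p^2) -> ~17.1
print(pessimistic, optimistic)        # 100000.0 17.109...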

So the best you can do is to use a heuristic algorithm (the Wikipedia article mentions an almost optimal greedy algorithm) and accept that there will be cases where the algorithm performs (very) badly. Or you go the other way and use an optimization algorithm, trying to find a good solution by spending more time. In that case, I would suggest trying A* search.
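For reference, the greedy heuristic for weighted set cover picks, at each step, the set with the lowest cost per newly covered element. A minimal sketch, with candidate generation left open since enumerating all submatrices is infeasible (representing each candidate block by the set of pairs it covers is my assumption):

def greedy_weighted_set_cover(universe, candidates, cost):
    # universe: set of required (record, field) pairs
    # candidates: list of frozensets of pairs (must jointly cover universe)
    # cost: dict mapping each candidate to its cost, e.g. a + |rows| * |cols|
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the candidate minimizing cost per newly covered pair
        best = min(
            (s for s in candidates if s & uncovered),
            key=lambda s: cost[s] / len(s & uncovered),
        )
        chosen.append(best)
        uncovered -= best
    return chosen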

answered Nov 16 '22 by Daniel Brückner