I've got a scenario that you can envision this way:
Start off with an image that is 100 pixels wide by 1000 pixels tall. In addition to that image, you have a set of cropped sections of that image. Each section is 100 pixels wide by 100 pixels tall. The part of the image contained in the section varies. For example, you might have one that starts at the very top (pixel 0), then one at vertical pixel 3, then one at vertical pixel 9 and so on.
What I need to do is find a way to look at that set of smaller pictures and pick out the smallest number of sections that gives me the most coverage of the original image.
Can anyone point me in the right direction here? I can do this sort of brute force... but I assume there's a better way.
I'm sorry, but I fail to see why this problem is NP-hard.
The general idea is that you'll iteratively remove parts from the bottom of your image by selecting the "best" section, as follows.
Begin by sorting the sections by their starting row. You'll get something like (0, 1, 3, 10, ..., 988, 999), where 0 corresponds to a section that begins at the top pixel (and the one starting at 999 covers only one line).
Suppose your original image is 100xN. Initially, N=1000.
Let n be the start of the section that best covers the end of the original image: i.e. n is the smallest number in the list such that n+100>=N. If there is no such number, n is simply the biggest number in the list.
If your sorted list is (0,1,...899, 900, 901,..,999) then n=900
If your sorted list is (0,1,...899, 905, 910,..,999) then n=905
If your sorted list is (0,1,...,888,898) then n=898
Then start again with N=n (you've removed a part of the bottom of the original image), and of course remove from the sorted list all the sections that start at or after n.
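A minimal sketch of this bottom-up greedy in Python (the function name and parameters are my own; it takes the sorted start rows and returns the chosen starts, bottom-most first):

```python
def pick_sections(starts, height=1000, section_h=100):
    """Greedily cover the image from the bottom up.

    Each section covers rows [start, start + section_h).
    """
    chosen = []
    N = height
    candidates = sorted(starts)
    while candidates and N > 0:
        # The smallest start n with n + section_h >= N reaches the current
        # bottom while removing the most rows; if no section reaches the
        # bottom, fall back to the largest start available.
        reaching = [n for n in candidates if n + section_h >= N]
        n = min(reaching) if reaching else max(candidates)
        chosen.append(n)
        N = n  # everything at or below row n is now handled
        candidates = [c for c in candidates if c < n]
    return chosen
```

For instance, with a section starting at every row of a 1000-row image, this picks the ten sections starting at 900, 800, ..., 0.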
I think that fixing the section height (at 100 pixels) removes the NP-hardness.
I think this is http://en.wikipedia.org/wiki/Maximum_coverage_problem -- the elements of the sets are pixels (though you can write the code so that it doesn't deal with things pixel by pixel).
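To make the connection concrete, here is a sketch of the standard greedy heuristic for maximum coverage applied to this setting: given a budget of k sections, repeatedly pick the section covering the most not-yet-covered rows. Names are my own, and it deliberately works pixel-by-pixel (as sets of rows) purely for clarity:

```python
def greedy_max_cover(starts, k, section_h=100):
    """Greedy maximum coverage: pick up to k sections, each covering
    rows [s, s + section_h), maximizing the number of covered rows."""
    covered = set()
    picked = []
    remaining = sorted(set(starts))
    for _ in range(k):
        if not remaining:
            break
        # Section whose uncovered portion is largest (ties -> smallest start).
        best = max(remaining,
                   key=lambda s: len(set(range(s, s + section_h)) - covered))
        gain = set(range(best, best + section_h)) - covered
        if not gain:
            break  # nothing new to cover
        covered |= gain
        picked.append(best)
        remaining.remove(best)
    return picked, len(covered)
```

Note that greedy only guarantees a (1 - 1/e) approximation for general maximum coverage; the other answers exploit the 1-D interval structure to do better.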
Because the sections have a fixed size (100x100 on a 100x1000 image), the problem is no longer NP-hard, and is probably even in P. A greedy approach will not work, but there is a dynamic programming solution, which runs in roughly O(N) time if the sections are sufficiently spread out, and in O(N * max_overlap) otherwise. The trick is to go "forwards and backwards".
input:
    [ ] to fill
    [ (] ) { ([}) ] ( [) ]

return:
    the Result set of squares which maximizes cover, secondarily
    minimizing the size of the Result set if covered areas are equal
the score of the leftmost element is {area: 100^2, num: 1}

for each square S in order, left -> right:
    (assuming you pick S in Result...)
    let Candidates = {all squares which overlap S and are left of S}
                   + {first non-overlapping square left of S}
    for each candidate C:
        let score(S) = score(C) + {area: new_area_covered_by_S, num: 1}
    pick the candidate BestC which maximizes score(S)
    draw a line between S and BestC

Take the square with the best score, and work your way backwards
along the chain of best-picks, returning only those squares.
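A sketch of this DP in Python for the 1-D case from the question (each square covers 100 consecutive rows, so "area" reduces to row count). The function name and the (covered, -count) score encoding are my own; scores are compared lexicographically so coverage wins and ties prefer fewer squares:

```python
def dp_cover(starts, H=100):
    """Pick squares (each covering rows [s, s + H)) maximizing covered
    rows, secondarily minimizing the number of squares used."""
    xs = sorted(set(starts))
    if not xs:
        return [], 0
    # best[i] = (covered_rows, -num_squares, prev_index) for the best
    # chain whose rightmost pick is the square starting at xs[i]
    best = []
    for i, s in enumerate(xs):
        # Candidates: squares overlapping xs[i] on the left, plus the
        # first non-overlapping square left of it.
        cands = []
        j = i - 1
        while j >= 0 and xs[j] + H > s:
            cands.append(j)
            j -= 1
        if j >= 0:
            cands.append(j)
        if not cands:
            best.append((H, -1, None))  # leftmost square starts the chain
            continue
        scored = []
        for c in cands:
            cov, neg, _ = best[c]
            gain = min(H, s - xs[c])    # rows this square newly covers
            scored.append((cov + gain, neg - 1, c))
        best.append(max(scored))
    # Backwards pass: walk the chain of best-picks from the best end.
    end = max(range(len(xs)), key=lambda i: best[i][:2])
    picks, i = [], end
    while i is not None:
        picks.append(xs[i])
        i = best[i][2]
    return picks[::-1], best[end][0]
```

For example, with squares starting at 0, 50, 120, and 300, it keeps the square at 50 (since rows 100-119 are coverable, the stated constraint forces covering them) and reports 320 covered rows.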
This assumes you will add an extra square even for an extra 0.0001% of coverage, i.e. "at every point, if it is possible to cover it with a square, it must be covered with a square". You can modify the algorithm to trade this off appropriately.
This further assumes that it is not the case that nearly all the squares overlap each other at a single point (they may still overlap, but should be somewhat spread out); otherwise it may take a long time.
Also note that you may divide the problem into subproblems whenever you have a break that is unfilled by a square.