I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:
(defun subset-contains-sum (set sum)
  ;; SUBSETS holds (subset . sum-of-subset) pairs for every subset
  ;; generated so far.
  (let ((subsets '()))
    (dolist (element set)
      ;; The singleton subset may already be the answer.
      (when (= element sum)
        (return-from subset-contains-sum (list element)))
      ;; Extend every existing subset with ELEMENT.  Consing onto the
      ;; front shares structure, so each new subset costs O(1).
      (dolist (subset-sum subsets)
        (let ((new-subset (cons element (car subset-sum)))
              (new-sum (+ element (cdr subset-sum))))
          (when (= new-sum sum)
            (return-from subset-contains-sum new-subset))
          (push (cons new-subset new-sum) subsets)))
      ;; Record the singleton subset itself.
      (push (cons (list element) element) subsets))))
"set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just cons'ing the element to the front.
I am not sure what its runtime complexity is, but it appears that with each element added to "set", the size of "subsets" doubles, plus one, so it appears to me to be at least quadratic.
I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?
A standard dynamic-programming algorithm for Subset Sum is polynomial in the values of the elements and the target, which are exponential in their numbers of bits. However, Subset Sum encoded in unary is in P, since then the size of the encoding is linear in those values. Hence, Subset Sum is only weakly NP-complete.
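To make "polynomial in the values" concrete, here is a sketch of that pseudo-polynomial dynamic program for the decision version (this is not the poster's algorithm; subset-sum-dp is a made-up name, and it assumes positive integers and a non-negative target). It runs in O(n*sum) time, polynomial in the value of sum but exponential in the number of bits of sum:

(defun subset-sum-dp (set sum)
  ;; REACHABLE[S] is true when some subset of the elements seen so
  ;; far sums to S.  The empty subset reaches 0.
  (let ((reachable (make-array (1+ sum) :initial-element nil)))
    (setf (aref reachable 0) t)
    (dolist (element set)
      ;; Walk downwards so each element is used at most once.
      (loop for s from sum downto element
            when (aref reachable (- s element))
              do (setf (aref reachable s) t)))
    (aref reachable sum)))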
Clearly, the Knapsack (Subset Sum) Problem reduces to the 0-1 Knapsack Problem (take each element's weight and value to be the element itself and the capacity to be the target; the target sum is attainable exactly when the optimal knapsack value equals the capacity), and thus the 0-1 Knapsack Problem is also NP-complete.
Subset Sum is a true decision problem, not an optimization problem forced to become a decision problem. It is easy to see that Subset Sum is in NP: a proposed subset serves as a certificate that can be checked in polynomial time.
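A minimal verifier along those lines (a sketch; verify-subset-sum is a made-up name): given a candidate subset as the certificate, checking it is a single O(n) sum.

(defun verify-subset-sum (subset sum)
  ;; The subset itself is the certificate; verifying it just means
  ;; summing its elements and comparing against the target.
  (= (reduce #'+ subset) sum))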
An NP-complete problem is any of a class of computational problems for which no efficient solution algorithm has been found. Many significant computer-science problems belong to this class, e.g. the traveling salesman problem, satisfiability problems, and graph-covering problems.
NP-complete problems are solvable, just not in polynomial time (as far as we know). That is, an NP-complete problem may have an O(n*2^n) algorithm that solves it, but it won't have, for example, an O(n^3) algorithm.
Interestingly, if a quick (polynomial) algorithm were found for any NP-complete problem, then every problem in NP could be solved in polynomial time. This is what the P=NP question is about.
If I understand your algorithm correctly (and this is based more on your comments than on the code), then it is equivalent to the O(n*2^n) algorithm here: there are 2^n subsets, and since you also need to sum each subset, the algorithm is O(n*2^n).
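You can also count this directly for the code above: if s(k) is the size of "subsets" after processing k elements, each element doubles the list and adds one entry, so s(k) = 2*s(k-1) + 1 with s(0) = 0, which solves to s(k) = 2^k - 1. The "doubles, plus one" growth observed in the question is exactly exponential, not quadratic.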
One more thing about complexity - the O(whatever) only indicates how well a particular algorithm scales. You cannot compare two algorithms and say that one is faster than the other based on this. Big-O notation doesn't care about implementation details and optimisations - it is possible to write two implementations of the same algorithm with one being much faster than the other, even though they might both be O(n^2). One woman making babies is an O(n) operation, but the chances are that it is going to take a lot longer than most O(n*log(n)) sorts you perform. All you can say based on this is that sorting will be slower for very large values of n.