Can a typical Lisp dialect solve problems using the bottom-up "dynamic programming" approach?
(Please note: I'm not talking about "memoization", which, as far as I understand, is trivial in any Lisp dialect. I'm really talking about bottom-up dynamic programming, where you build, for example, your array bottom-up and then use the elements you just introduced to compute the next ones.)
For example, using dynamic programming, the "0-1 knapsack" problem can be solved in pseudo-polynomial time for inputs on which any other method would fail.
An imperative (incomplete) solution is:
for (int k = 1; k <= a.length; k++) {
    for (int y = 1; y <= b; y++) {
        if (y < a[k-1]) {
            knap[k][y-1] = knap[k-1][y-1];
        } else {
            if (y > a[k-1]) {
                knap[k][y-1] = Math.max(knap[k-1][y-1], knap[k-1][y-1-a[k-1]] + c[k-1]);
            } else {
                knap[k][y-1] = Math.max(knap[k-1][y-1], c[k-1]);
            }
        }
    }
}
Is such a thing possible to do in the various Lisp dialects? If no, why not?
Of course this is possible. The only things you need are arrays, integers and a loop construct. E.g., in Scheme, your algorithm can be transcribed using vectors. The main problem is that it becomes hard to read, since knap[k-1][y-1]
becomes (vector-ref (vector-ref knap (- k 1)) (- y 1))
and
knap[k][y-1] = knap[k-1][y-1];
becomes
(vector-set! (vector-ref knap k) (- y 1)
(vector-ref (vector-ref knap (- k 1)) (- y 1)))
(or a hairy trick with moduli), while memoized recursions just remain readable.
Speaking from experience, I recommend you stick to the memoization when programming in Lisp and similar languages. If you use hash tables, the expected asymptotic time and space complexity are the same for memoization and DP.
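To make that equivalence concrete, here is a rough sketch of the memoized top-down version in Java (the `Knapsack` class and `best` method are invented names for illustration; `a` = weights, `c` = values, and the capacity argument follow the question's snippet):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a memoized top-down 0-1 knapsack, reusing the question's
// naming: a = item weights, c = item values. Class/method names are
// illustrative, not taken from any library.
class Knapsack {
    final int[] a, c;
    final Map<Long, Integer> memo = new HashMap<>();

    Knapsack(int[] weights, int[] values) { a = weights; c = values; }

    // Best value achievable using items 0..k-1 with remaining capacity y.
    int best(int k, int y) {
        if (k == 0 || y == 0) return 0;
        long key = ((long) k << 32) | y;     // pack (k, y) into one hash key
        Integer cached = memo.get(key);
        if (cached != null) return cached;
        int result = best(k - 1, y);         // option 1: skip item k-1
        if (a[k - 1] <= y) {                 // option 2: take it, if it fits
            result = Math.max(result, best(k - 1, y - a[k - 1]) + c[k - 1]);
        }
        memo.put(key, result);
        return result;
    }
}
```

Each distinct `(k, y)` pair is computed once, so the hash table gives the same O(n·b) expected time and space as the bottom-up table.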
The short answer is yes: Clojure can work directly with Java arrays, so a direct translation is quite straightforward:
;; doseq rather than for: for is lazy, and this loop runs only for its
;; side effects on the array. inc fixes the off-by-one, since the Java
;; loops run up to and including a.length and b.
(doseq [k (range 1 (inc (count a)))
        y (range 1 (inc b))]
  (if (< y (aget a (dec k)))
    (aset knap k (dec y) (aget knap (dec k) (dec y)))
    (if (> y (aget a (dec k)))
      (aset knap k (dec y) (max (aget knap (dec k) (dec y))
                                (+ (aget knap (dec k) (- y 1 (aget a (dec k))))
                                   (aget c (dec k)))))
      (aset knap k (dec y) (max (aget knap (dec k) (dec y))
                                (aget c (dec k)))))))
This is not very idiomatic Clojure, because it combines the looping with the work to be done. The resulting code will be much cleaner and easier to show correct if you separate the elements of this loop out.
As a trivial first step, if we separate the looping from the 'work', we get:
(defn edit-array [k y]
  (if (< y (aget a (dec k)))
    (aset knap k (dec y) (aget knap (dec k) (dec y)))
    (if (> y (aget a (dec k)))
      (aset knap k (dec y) (max (aget knap (dec k) (dec y))
                                (+ (aget knap (dec k) (- y 1 (aget a (dec k))))
                                   (aget c (dec k)))))
      (aset knap k (dec y) (max (aget knap (dec k) (dec y))
                                (aget c (dec k)))))))

(doseq [k (range 1 (inc (count a)))
        y (range 1 (inc b))]
  (edit-array k y))
Then we can test edit-array from the REPL and convince ourselves that it works (and perhaps write a unit test). After that, you may want to look at edit-array more closely and decide whether it can be broken into steps that are easier to test independently. Perhaps you could change it to use a functional style instead of mutating an array. Here I will move away from your specific problem, because I have to admit I don't fully understand the original problem this dynamic-programming solution was designed to solve.
(defn finished? [row] ... determine if we have reached the final state ...)

(defn next-row [current-row]
  (for [y (range 1 (count current-row))]
    ... code to produce a new vector based on the previous one ...))

(take-while #(not (finished? %)) (iterate next-row (initial-state)))
The basic notion of what idiomatic Clojure code looks like, to me, is to decompose the problem into simple (do-only-one-thing) abstractions and then compose them to solve the main problem. Of course this must always be tailored to the problem at hand.
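To make the row-at-a-time decomposition concrete, here is a minimal sketch in Java (matching the question's imperative snippet; the names `nextRow` and `knapsack` are invented for illustration) of a 0-1 knapsack where each row is a pure function of the previous one, mirroring the `(iterate next-row ...)` shape above:

```java
// Sketch of the row-at-a-time decomposition applied to 0-1 knapsack.
// Each row depends only on its predecessor, so it can be computed as a
// pure function of the previous row instead of mutating a 2-D table.
class KnapsackRows {
    // New row after considering one more item of the given weight/value.
    static int[] nextRow(int[] prev, int weight, int value) {
        int[] row = prev.clone();
        for (int y = weight; y < prev.length; y++) {
            row[y] = Math.max(prev[y], prev[y - weight] + value);
        }
        return row;
    }

    // a = weights, c = values, b = capacity, as in the question.
    static int knapsack(int[] a, int[] c, int b) {
        int[] row = new int[b + 1];          // initial state: all zeros
        for (int k = 0; k < a.length; k++) {
            row = nextRow(row, a[k], c[k]);  // pure step: old row in, new row out
        }
        return row[b];
    }
}
```

Because `nextRow` never mutates its input, it can be tested in isolation with a handful of hand-written rows, which is exactly the testability benefit argued for above.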
Pardon me for saying this, but the wikipedia page you refer to is (imnsho) not very well-written. In particular, it more or less fabricates the dichotomy between top-down and bottom-up dynamic programming, and goes on to describe one as "more interesting". The only difference between the two is the order in which the table is constructed. Memoization gives rise to both of these, depending on the order in which the calls are made.
Apologies in advance to whoever wrote this section of the page; I appreciate your effort, I just think that the section needs some work.