The classic algorithm books (TAOCP, CLR) and not-so-classic ones (such as the fxtbook) are full of imperative algorithms. This is most obvious with algorithms whose implementation is heavily based on arrays, such as combinatorial generation (where both the array index and the array value are used in the algorithm) or the union-find algorithm.
The worst-case complexity analysis of these algorithms depends on array accesses being O(1). If you replace arrays with array-like persistent structures, as Clojure does, the array accesses are no longer O(1), and the complexity analysis of those algorithms no longer holds.
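To make the cost difference concrete, here is a minimal Haskell sketch (bumpArray and bumpMap are illustrative names, not from the question):

    import Data.Array (Array, (!), (//))
    import qualified Data.Map.Strict as M

    -- Immutable array: reads are O(1), but each "update" copies
    -- the whole array, costing O(n).
    bumpArray :: Array Int Int -> Int -> Array Int Int
    bumpArray a i = a // [(i, a ! i + 1)]

    -- Persistent balanced-tree map: reads and updates are both
    -- O(log n); neither is the O(1) the textbook analyses assume.
    bumpMap :: M.Map Int Int -> Int -> M.Map Int Int
    bumpMap m i = M.adjust (+ 1) i m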
Which brings me to the following question: is pure functional programming incompatible with the classical algorithms literature?
Definition. In programming, a pure function is a function with the following properties: it always returns the same value for the same inputs, and its evaluation has no side effects.
Efficiency issues. Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures, such as arrays, have a very straightforward implementation using existing hardware.
From HaskellWiki. A function is called pure if it corresponds to a function in the mathematical sense: it associates each possible input value with an output value, and does nothing else.
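As a concrete illustration of that definition (the names double and lineLength are just for the example):

    -- Pure: the result depends only on the argument, and calling it
    -- has no observable effect.
    double :: Int -> Int
    double x = 2 * x

    -- Not pure: the result depends on the outside world, which
    -- Haskell makes explicit in the type via IO.
    lineLength :: IO Int
    lineLength = fmap length getLine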
With respect to data structures, Chris Okasaki has done substantial research into adapting classic data structures to a purely functional setting, as many of the standard data structures no longer work when destructive updates are forbidden. His book "Purely Functional Data Structures" shows how some structures, like binomial heaps and red-black trees, can be implemented quite well in a functional setting, while other basic structures like arrays and queues must be implemented with more elaborate techniques.
If you're interested in pursuing this branch of the algorithms literature, his book would be an excellent starting point.
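For instance, one of the simplest structures from that line of work is the two-list ("batched") queue: enqueues go onto a back list, dequeues come off a front list, and the back list is reversed into the front only when the front runs empty. A minimal sketch:

    -- A purely functional FIFO queue as two lists (front, back).
    data Queue a = Queue [a] [a]

    empty :: Queue a
    empty = Queue [] []

    -- Enqueue onto the back list: O(1).
    snoc :: Queue a -> a -> Queue a
    snoc (Queue f b) x = check f (x : b)

    -- Dequeue from the front list.
    uncons :: Queue a -> Maybe (a, Queue a)
    uncons (Queue [] _)    = Nothing
    uncons (Queue (x:f) b) = Just (x, check f b)

    -- Invariant: the front list is empty only if the whole queue is.
    check :: [a] -> [a] -> Queue a
    check [] b = Queue (reverse b) []
    check f  b = Queue f b

The reverse costs O(n), but each element is reversed at most once, so a sequence of n operations costs O(n) overall, i.e. O(1) amortized for single-threaded use.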
The short answer is that an algorithm is pure so long as it has no effects that can be observed after it finishes, other than what it returns. This holds even when the algorithm internally performs destructive array updates or other mutation.
If you had an algorithm like say:
    function zero(array):
        ix <- 0
        while (ix < length(array)):
            array[ix] <- 0
            ix <- ix + 1
        return array
Assuming our pseudocode above is lexically scoped, then so long as the array parameter is first copied and the returned array is a wholly new value, this algorithm represents a pure function (in this case, the Haskell function fmap (const 0) would do the same job). Most "imperative" algorithms shown in books are really pure functions, and it is perfectly fine to write them that way in a purely functional setting using something like the ST monad.
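As a sketch of that point (using GHC's array package; zeroArray is an illustrative name), the loop above can be written with a locally mutable array inside ST, and the result is still a pure function:

    import Control.Monad (forM_)
    import Data.Array (Array)
    import Data.Array.ST (runSTArray, thaw, getBounds, writeArray)

    -- 'thaw' copies the input into a local mutable array, the loop
    -- mutates it in place, and runSTArray seals the result back into
    -- an immutable Array. No mutation escapes, so zeroArray as a
    -- whole is pure (observationally the same as fmap (const 0)).
    zeroArray :: Array Int Int -> Array Int Int
    zeroArray a = runSTArray $ do
      arr <- thaw a
      (lo, hi) <- getBounds arr
      forM_ [lo .. hi] $ \i -> writeArray arr i 0
      return arr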
I would recommend looking at Mercury or the Disciplined Disciple Compiler (DDC) to see pure languages that still thrive on destructive update.