Let's say that we have a recursive data structure, like a binary tree. There are many ways to traverse it, and they have different memory-usage profiles. For instance, if we were to simply print the value of each node, using pseudo-code like the following in-order traversal...
visitNode(node) {
  if (node == null) return;
  visitNode(node.leftChild);
  print(node.value);
  visitNode(node.rightChild);
}
...our heap usage would be constant, but due to the recursive calls, we would grow the call stack in proportion to the depth of the tree. On very deep trees, this could potentially overflow it.
Let's say that we decided to optimize for call-stack size; assuming that this language is capable of proper tailcalls, we could rewrite this as the following pre-order traversal...
visitNode(node, nodes = []) {
  if (node != null) {
    print(node.value);
    // visit the left subtree next, stashing the right
    // subtree on the front of the work list (pre-order)
    visitNode(node.leftChild, [node.rightChild] + nodes);
  } else if (nodes.length != 0) {
    visitNode(nodes.head, nodes.tail);
  } else {
    return;
  }
}
While we would never blow the stack, we would now see heap usage increase linearly with respect to the size of the tree.
Let's say we were then to attempt to lazily traverse the tree - here is where my reasoning gets fuzzy. I think that even using a basic lazy evaluation strategy, we would grow memory at the same rate as the tailcall optimized version. Here is a concrete example using Scala's Stream class, which provides lazy evaluation:
sealed abstract class Node[A] {
  def toStream: Stream[Node[A]]
  def value: A
}

case class Fork[A](value: A, left: Node[A], right: Node[A]) extends Node[A] {
  def toStream: Stream[Node[A]] = this #:: left.toStream.append(right.toStream)
}

case class Leaf[A](value: A) extends Node[A] {
  def toStream: Stream[Node[A]] = this #:: Stream.empty
}
Although only the head of the stream is strictly evaluated, any time left.toStream.append(right.toStream) is evaluated, I think this would actually evaluate the head of both the left and right streams. Even if it doesn't (due to append cleverness), I think that recursively building this thunk (to borrow a term from Haskell) would essentially grow memory at the same rate. Rather than saying, "put this node in the list of nodes to traverse", we're basically saying, "here's another value to evaluate that will tell you what to traverse next", but the outcome is the same: linear memory growth.
The only strategy I can think of that would avoid this is having mutable state in each node declaring which paths have been traversed. This would allow us to have a referentially transparent function that says, "Given a node, I will tell you which single node you should traverse next", and we could use that to build an O(1) iterator.
Is there another way to accomplish O(1), tailcall optimized traversal of a binary tree, possibly without mutable state?
As I stated in my comment, you can do this if the tree need not survive the traversal. Here's a Haskell example:
data T = Leaf | Node T Int T

-- in-order traversal using only right rotations at the root
inOrder :: T -> [Int]
inOrder Leaf                    = []
inOrder (Node Leaf x r)         = x : inOrder r                   -- emit the root, move right
inOrder (Node (Node l x m) y r) = inOrder $ Node l x (Node m y r) -- right-rotate
This takes O(1) auxiliary space if we assume the garbage collector will clean up any Node we have just processed, since each step effectively replaces the tree with a right-rotated version. However, if the processed nodes cannot be garbage-collected immediately, then the final clause may build up O(n) nodes before it hits a leaf.
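For instance, a quick sanity check (the sample tree and main here are my additions, not part of the answer):

-- a small sample tree:   2
--                       / \
--                      1   3
sample :: T
sample = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)

main :: IO ()
main = print (inOrder sample) -- prints [1,2,3]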
If you have parent pointers, then it's also doable. Parent pointers require mutable state, though, and prevent sharing of subtrees, so they're not really functional. If you represent an iterator by a pair (cur, prev) that is initially (root, nil), then you can perform iteration as outlined here. You need a language with pointer comparisons to make this work, though.
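To make that concrete, here is a minimal Haskell sketch of that (cur, prev) iteration; the MNode type and the newNode/setLeft/setRight helpers are hypothetical names of my own, the mutable state comes from IORefs, and IORef's Eq instance stands in for the pointer comparison:

import Data.IORef

-- hypothetical mutable node type; IORef's Eq instance provides the
-- reference (pointer) comparison the algorithm relies on
data MNode = MNode
  { mval   :: Int
  , mleft  :: IORef (Maybe MNode)
  , mright :: IORef (Maybe MNode)
  , mpar   :: IORef (Maybe MNode)
  }

instance Eq MNode where
  a == b = mleft a == mleft b  -- comparing IORefs compares references

newNode :: Int -> IO MNode
newNode x =
  MNode x <$> newIORef Nothing <*> newIORef Nothing <*> newIORef Nothing

setLeft, setRight :: MNode -> MNode -> IO ()
setLeft  p c = writeIORef (mleft  p) (Just c) >> writeIORef (mpar c) (Just p)
setRight p c = writeIORef (mright p) (Just c) >> writeIORef (mpar c) (Just p)

-- in-order traversal in O(1) auxiliary space: step the (cur, prev)
-- pair, deciding the next move by comparing prev against cur's
-- parent and left child
inOrderIter :: MNode -> IO ()
inOrderIter root = go (Just root) Nothing
  where
    go Nothing _ = return ()
    go (Just cur) prev = do
      p <- readIORef (mpar  cur)
      l <- readIORef (mleft cur)
      r <- readIORef (mright cur)
      let visit = do
            print (mval cur)             -- "visiting" = printing here
            case r of
              Just _  -> go r (Just cur) -- next: the right subtree
              Nothing -> go p (Just cur) -- no right child: climb back up
      if prev == p
        then case l of
               Just _  -> go l (Just cur) -- first arrival: descend left
               Nothing -> visit           -- no left child: visit now
        else if prev == l
               then visit                 -- back from the left: visit
               else go p (Just cur)       -- back from the right: climb up

main :: IO ()
main = do
  [n1, n2, n3] <- mapM newNode [1, 2, 3]
  setLeft n2 n1    -- tree:   2
  setRight n2 n3   --        / \
  inOrderIter n2   --       1   3    => prints 1, 2, 3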
Without parent pointers and mutable state, you need to maintain some data structure that at least tracks where the root of the tree is and how to get there, since you'll need such a structure at some point during in-order or post-order traversal. Such a structure necessarily takes Ω(d) space where d is the depth of the tree.
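For concreteness, here is one such structure for a tree type like the T above, as a self-contained sketch of my own (re-declaring T so the snippet stands alone): the iterator is simply the stack of ancestors whose value and right subtree are still pending, which occupies exactly the Ω(d) space mentioned.

import Data.List (unfoldr)

data T = Leaf | Node T Int T

-- the pending stack: each entry is a value to emit plus the right
-- subtree to traverse afterwards; its length is bounded by the
-- depth of the tree
type Iter = [(Int, T)]

-- descend along the left spine, stacking up pending entries
push :: T -> Iter -> Iter
push Leaf         s = s
push (Node l x r) s = push l ((x, r) : s)

-- pop the next value and push its right subtree's left spine
next :: Iter -> Maybe (Int, Iter)
next []           = Nothing
next ((x, r) : s) = Just (x, push r s)

inOrderList :: T -> [Int]
inOrderList t = unfoldr next (push t [])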
A fancy answer.
We can use free monads to get an efficient bound on memory utilization.
{-# LANGUAGE RankNTypes
           , MultiParamTypeClasses
           , FlexibleInstances
           , UndecidableInstances #-}

import Control.Monad (join)
class Algebra f x where
  phi :: f x -> x
An algebra of a functor f is a function phi from f x to x, for some x. For example, any monad has an algebra for any object m x:
instance (Monad m) => Algebra m (m x) where
  phi = join
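For instance, with the list monad this algebra is just join, i.e. concatenation:

phi [[1, 2], [3]] -- evaluates to [1, 2, 3]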
A free monad can be constructed for any functor f (possibly only for certain classes of functors, such as omega-cocomplete ones; but all Haskell types are polynomial functors, which are omega-cocomplete, so the statement certainly holds for all Haskell functors):
data Free f a = Free (forall x. Algebra f x => (a -> x) -> x)
runFree g (Free m) = m g
instance Functor (Free f) where
  fmap f m = Free $ \g -> runFree (g . f) m

wrap :: (Functor f) => f (Free f a) -> Free f a
wrap f = Free $ \g -> phi $ fmap (runFree g) f

instance (Functor f) => Algebra f (Free f a) where
  phi = wrap

instance (Functor f) => Applicative (Free f) where
  pure a = Free ($ a)
  mf <*> ma = mf >>= \f -> fmap f ma

instance (Functor f) => Monad (Free f) where
  m >>= f = fjoin $ fmap f m

fjoin :: (Functor f) => Free f (Free f a) -> Free f a
fjoin mma = Free $ \g -> runFree (runFree g) mma
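Note that this Free is a Church-style (final) encoding of the free monad: a tree is represented directly by the fold that interprets it into any algebra, which is what lets runFree consume it without first building an intermediate structure.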
Now we can use Free to construct the free monad for the functor T a:
data T a b = T a b b
instance Functor (T a) where
  fmap f (T a l r) = T a (f l) (f r)
For this functor we can define an algebra for the object [a]:
instance Algebra (T a) [a] where
  phi (T a l r) = l ++ (a : r)
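Note that this particular algebra is what makes the traversal in-order: the left result, then the node's value, then the right result. Swapping in, say, phi (T a l r) = a : (l ++ r) would yield a pre-order traversal instead.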
A tree is the free monad over the functor T a:
type Tree a = Free (T a) ()
It can be constructed using the following functions (had Tree been defined as an ADT, these would simply be its constructors, so nothing extraordinary):
tree :: a -> Tree a -> Tree a -> Tree a
tree a l r = phi $ T a l r -- phi here is from Algebra f (Free f a),
                           -- translating T a (Tree a) (Tree a) into Tree a
leaf :: Tree a
leaf = return ()
To demonstrate how this works:
bar = tree 'a' (tree 'b' leaf leaf) $ tree 'r' leaf leaf
buz = tree 'b' leaf $ tree 'u' leaf $ tree 'z' leaf leaf
foo = tree 'f' leaf $ tree 'o' (tree 'o' leaf leaf) leaf
toString = runFree (\_ -> [] :: String)
main = print $ map toString [bar, buz, foo]
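This prints ["bar","buz","foo"]: each string is the in-order traversal of the corresponding tree.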
As runFree traverses the tree to replace leaf () with [], the algebra for T a [a] in all contexts is the algebra that constructs a string representing the in-order traversal of the tree. Because the functor T a b constructs a new tree as it goes, it must have the same memory-consumption characteristics as the solution quoted by larsmans: if the tree is not kept in memory, the nodes are discarded as soon as they are replaced by the string representing the whole subtree.