In Haskell, difference lists, in the sense of "a representation of a list with an efficient concatenation operation", seem to be implemented in terms of function composition. Functions and (dynamic) function compositions, though, must also be represented somehow in the computer's memory using data structures. This raises the question of how difference lists could be implemented in Haskell without using function composition - through some basic purely-functional node-based data structure instead. How could this be done with the same performance guarantees as those of function composition?
(++)'s notoriously bad asymptotics come about when you use it in a left-associative manner - that is, when (++)'s left argument is the result of another call to (++). Right-associative expressions run efficiently, though.
To put it more concretely: evaluating a left-nested append of m lists like

((ws ++ xs) ++ ys) ++ zs -- m = 3 in this example

to WHNF requires you to force m thunks, because (++) is strict in its left argument:
case (
case (
case ws of { [] -> xs ; (w:ws) -> w:(ws ++ xs) }
) of { [] -> ys ; (x:xs) -> x:(xs ++ ys) }
) of { [] -> zs ; (y:ys) -> y:(ys ++ zs) }
In general, to fully evaluate n elements of such a list, this'll require forcing that stack of m thunks n times, for O(m*n) complexity. When the entire list is built from appends of singleton lists (i.e. (([w] ++ [x]) ++ [y]) ++ [z]), m = n, so the cost is O(n²).
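To make the quadratic case concrete, here is a small, self-contained sketch (my own illustration, not from the original post; buildLeft is a made-up name):

-- Quadratic: foldl nests every (++) to the left, so producing each
-- successive element must force the whole stack of pending appends.
buildLeft :: Int -> [Int]
buildLeft n = foldl (\acc x -> acc ++ [x]) [] [1 .. n]

main :: IO ()
main = print (last (buildLeft 20000)) -- noticeably slow: roughly n²/2 forced thunks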
Evaluating a right-nested append like
ws ++ (xs ++ (ys ++ zs))
to WHNF is much easier (O(1)):
case ws of
  [] -> xs ++ (ys ++ zs)
  (w:ws) -> w:(ws ++ (xs ++ (ys ++ zs)))
Evaluating n elements just requires evaluating n thunks, which is about as good as you can expect to do.
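For contrast with the quadratic sketch above, the right-nested equivalent (again my own illustration, with a made-up name) runs in linear time:

-- Linear: foldr nests every (++) to the right, so each element is
-- produced after forcing only a single thunk.
buildRight :: Int -> [Int]
buildRight n = foldr (\x acc -> [x] ++ acc) [] [1 .. n]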
Difference lists work by paying a small (O(m)) up-front cost to automatically re-associate calls to (++).
newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []

instance Monoid (DList a) where
    mempty = fromList []
    DList f `mappend` DList g = DList (f . g)
Now a left-nested expression,
toList (((fromList ws <> fromList xs) <> fromList ys) <> fromList zs)
gets evaluated in a right-associative manner:
((((ws ++) . (xs ++)) . (ys ++)) . (zs ++)) []
-- expand innermost (.)
(((\l0 -> ws ++ (xs ++ l0)) . (ys ++)) . (zs ++)) []
-- expand innermost (.)
((\l1 -> (\l0 -> ws ++ (xs ++ l0)) (ys ++ l1)) . (zs ++)) []
-- beta reduce
((\l1 -> ws ++ (xs ++ (ys ++ l1))) . (zs ++)) []
-- expand innermost (.)
(\l2 -> (\l1 -> ws ++ (xs ++ (ys ++ l1))) (zs ++ l2)) []
-- beta reduce
(\l2 -> ws ++ (xs ++ (ys ++ (zs ++ l2)))) []
-- beta reduce
ws ++ (xs ++ (ys ++ (zs ++ [])))
You perform O(m) steps to evaluate the composed function, then O(n) steps to evaluate the resulting expression, for a total complexity of O(m+n), which is asymptotically better than O(m*n). When the list is made up entirely of appends of singleton lists, m = n and you get O(2n) ~ O(n), which is asymptotically better than O(n²).
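As a usage sketch (my own code, not the answer's; it assumes the DList definitions above, and on GHC 8.4+ you'd also need a matching Semigroup instance, since Monoid now requires one), the quadratic left-nested build becomes linear when routed through DList:

-- Same left-nested build as the slow example, but DList re-associates
-- the appends to the right, so the whole thing is O(n).
buildLeftDL :: Int -> [Int]
buildLeftDL n = toList (foldl (\acc x -> acc <> fromList [x]) mempty [1 .. n])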
This trick works for any Monoid.
newtype RMonoid m = RMonoid (m -> m) -- "right-associative monoid"

toRM m = RMonoid (m <>)
fromRM (RMonoid f) = f mempty

instance Monoid m => Monoid (RMonoid m) where
    mempty = toRM mempty
    RMonoid f `mappend` RMonoid g = RMonoid (f . g)
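As a quick sanity check (my own example, with the same Semigroup caveat as above), a left-nested pile of RMonoid appends comes back out right-associated, in the original order:

-- fromRM applies the composed function to mempty, so this evaluates
-- as "a" <> ("b" <> ("c" <> "")) even though we nested to the left.
example :: String
example = fromRM ((toRM "a" <> toRM "b") <> toRM "c") -- "abc"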
See also, for example, the Codensity monad, which applies this idea to monadic expressions built using (>>=) (rather than monoidal expressions built using (<>)).
I hope I have convinced you that (++) only causes a problem when used in a left-associative fashion. Now, you can easily write a list-like data structure for which append is strict in its right argument, and left-associative appends are therefore not a problem.
data Snoc a = Nil | Snoc (Snoc a) a
xs +++ Nil = xs
xs +++ (Snoc ys y) = Snoc (xs +++ ys) y
We recover O(1) WHNF of left-nested appends,
((ws +++ xs) +++ ys) +++ zs
case zs of
  Nil -> (ws +++ xs) +++ ys
  Snoc zs z -> Snoc (((ws +++ xs) +++ ys) +++ zs) z
but at the expense of slow right-nested appends.
ws +++ (xs +++ (ys +++ zs))
case (
case (
case zs of { Nil -> ys ; (Snoc zs z) -> Snoc (ys +++ zs) z }
) of { Nil -> xs ; (Snoc ys y) -> Snoc (xs +++ ys) y }
) of { Nil -> ws ; (Snoc xs x) -> Snoc (ws +++ xs) x }
Then, of course, you end up writing a new type of difference list which reassociates appends to the left!
newtype LMonoid m = LMonoid (m -> m) -- "left-associative monoid"

toLM m = LMonoid (<> m)
fromLM (LMonoid f) = f mempty

instance Monoid m => Monoid (LMonoid m) where
    mempty = toLM mempty
    LMonoid f `mappend` LMonoid g = LMonoid (g . f)
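Symmetrically (my own example again), LMonoid re-associates a right-nested pile of appends to the left, preserving order:

-- fromLM evaluates this as (("" <> "a") <> "b") <> "c", even though
-- we nested to the right.
exampleL :: String
exampleL = fromLM (toLM "a" <> (toLM "b" <> toLM "c")) -- "abc"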
Carl hit it in his comment. We can write
data TList a = Nil | Single a | Node !(TList a) (TList a)

singleton :: a -> TList a
singleton = Single

instance Monoid (TList a) where
    mempty = Nil
    mappend = Node
We could get toList by just deriving Foldable, but let's write it out instead to see just what's going on.
-- toList is a class method of Foldable, so bring it into scope:
import Data.Foldable (Foldable (..))

instance Foldable TList where
    foldMap _ Nil = mempty
    foldMap f (Single a) = f a
    foldMap f (Node t u) = foldMap f t <> foldMap f u

    toList as0 = go as0 [] where
        go Nil k = k
        go (Single a) k = a : k
        go (Node l r) k = go l (go r k)
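A tiny usage example (my own; exampleT is a made-up name):

exampleT :: [Int]
exampleT = toList ((singleton 1 `mappend` singleton 2) `mappend` singleton 3)
-- = toList (Node (Node (Single 1) (Single 2)) (Single 3)) = [1,2,3]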
toList is O(n), where n is the total number of internal nodes (i.e., the total number of mappend operations used to form the TList). This should be pretty clear: each Node is inspected exactly once. mempty, mappend, and singleton are each obviously O(1).
This is exactly the same as for a DList:
newtype DList a = DList ([a] -> [a])

singletonD :: a -> DList a
singletonD a = DList (a:)

instance Monoid (DList a) where
    mempty = DList id
    mappend (DList f) (DList g) = DList (f . g)

instance Foldable DList where
    foldr c n xs = foldr c n (toList xs)
    toList (DList f) = f []
Why, operationally, is this the same? Because, as you indicate in your question, the functions are represented in memory as trees. And they're represented as trees that look a lot like TLists! singletonD x produces a closure containing a (boring) (:) and an exciting x. When applied, it does O(1) work. mempty just produces the id function, which when applied does O(1) work. mappend as bs produces a closure that, when applied, does O(1) work of its own, plus O(length as + length bs) work in its children.
The shapes of the trees produced for TList and DList are actually the same. You should be able to convince yourself that they also have identical asymptotic performance when used incrementally: in each case, the program has to walk down the left spine of the tree to get to the first list element.
Both DList and TList are equally okay when built up and then used only once. They're equally lousy when built once and converted to lists multiple times.
As Will Ness showed with a similar type, the explicit tree representation is better if you want to add support for deconstructing the representation, as you can actually get your hands on the structure. TList can support a reasonably efficient uncons operation (one that improves the structure as it works). To get an efficient unsnoc as well, you'll need to use a fancier representation (a catenable deque). This implementation also has potentially wretched cache performance. You can switch to a cache-oblivious data structure, but it is practically guaranteed to be complicated.
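The answer leaves uncons unwritten, but here is one hedged sketch of what a structure-improving uncons might look like (my own code; unconsT and go are made-up names, and this is one possible design, not the answer's):

-- Descend the left spine to reach the first element, rotating
-- left-nested Nodes to the right as we go, so the remainder we hand
-- back is cheaper to uncons next time.
unconsT :: TList a -> Maybe (a, TList a)
unconsT Nil        = Nothing
unconsT (Single a) = Just (a, Nil)
unconsT (Node t u) = go t u
  where
    go Nil v          = unconsT v
    go (Single a) v   = Just (a, v)
    go (Node t1 t2) v = go t1 (Node t2 v)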