How can I efficiently implement a list data structure with two views, one onto the head and one onto the end of the list, that always point to the head and the tail without expensive calls to `reverse`? I.e.:
```haskell
start x = []
end x = reverse start     -- []
start1 = [1,2,3] ++ start
end start1                -- [3,2,1]
```
`end` should be able to do this without invoking `reverse`, simply by looking at the given list as if it were in reverse automatically. The same should hold if I create new lists by concatenating onto `start`.
You could always just use `Data.Sequence`.
Alternatively, a well-known implementation of a purely functional queue is to use two lists: one for enqueue and another for dequeue. Enqueue simply conses onto the enqueue list. Dequeue takes the head of the dequeue list. When the dequeue list is shorter than the enqueue list, refill it by reversing the enqueue list. See Chris Okasaki's Purely Functional Data Structures.
Even though this implementation uses `reverse`, the amortized time cost of it is insignificant asymptotically. It works out so that every enqueue incurs a time debt of Θ(1) toward the eventual refill of the dequeue list. The expected time of a dequeue is therefore at most twice that of an enqueue. This is a constant factor, so the amortized cost of both operations is O(1).
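A minimal sketch of the two-list idea, with names of my own choosing (`Queue`, `enqueue`, `dequeue` are not from any particular library). This simple variant refills only when the front list runs empty:

```haskell
-- Two-list FIFO queue: front list serves dequeues, rear list takes enqueues.
data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

-- O(1): cons onto the rear (enqueue) list.
enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front rear) = Queue front (x : rear)

-- Take the head of the front list; when it is empty,
-- refill it by reversing the rear list (O(n), amortised O(1)).
dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue [] [])        = Nothing
dequeue (Queue [] rear)      = dequeue (Queue (reverse rear) [])
dequeue (Queue (x:front) rear) = Just (x, Queue front rear)
```

Draining a queue built with `enqueue 3 (enqueue 2 (enqueue 1 emptyQ))` yields the elements in FIFO order, 1 then 2 then 3.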
This question appears as the third result on the first page when I google "Haskell queue", but the information given above is misleading, so I feel there is a need to clarify a few things. (And the first search result is a blog post which contains a careless implementation...)
Everything below is basically from Okasaki's 1995 paper, Simple and efficient purely functional queues and deques, or from his book.
Okay, let's begin.
A persistent queue implementation with amortised O(1) time complexity is possible. The trick is to reverse the list representing the rear part of the queue while the front part is still long enough to amortise the cost of the `reverse`. So, instead of reversing the rear part when the front part is empty, we reverse it as soon as the front part becomes shorter than the rear part. The following code is from the appendix of Okasaki's book:
```haskell
data BQueue a = BQ !Int [a] !Int [a]

check :: Int -> [a] -> Int -> [a] -> BQueue a
check lenf fs lenr rs =
  if lenr <= lenf
    then BQ lenf fs lenr rs
    else BQ (lenr + lenf) (fs ++ reverse rs) 0 []

head :: BQueue a -> a
head (BQ _ [] _ _)    = error "empty queue"
head (BQ _ (x:_) _ _) = x

(|>) :: BQueue a -> a -> BQueue a
BQ lenf fs lenr rs |> x = check lenf fs (lenr + 1) (x : rs)

tail :: BQueue a -> BQueue a
tail (BQ _ [] _ _)            = error "empty queue"  -- missing case added
tail (BQ lenf (_:fs) lenr rs) = check (lenf - 1) fs lenr rs
```
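As given, the snippet is not directly loadable: `head` and `tail` clash with the Prelude, and there is no way to build an empty queue. Here is a self-contained copy with those gaps filled; `empty` and `demo` are my own additions, not part of the book's code:

```haskell
import Prelude hiding (head, tail)

data BQueue a = BQ !Int [a] !Int [a]

-- my addition: an empty queue to start from
empty :: BQueue a
empty = BQ 0 [] 0 []

check :: Int -> [a] -> Int -> [a] -> BQueue a
check lenf fs lenr rs
  | lenr <= lenf = BQ lenf fs lenr rs
  | otherwise    = BQ (lenr + lenf) (fs ++ reverse rs) 0 []

head :: BQueue a -> a
head (BQ _ [] _ _)    = error "empty queue"
head (BQ _ (x:_) _ _) = x

(|>) :: BQueue a -> a -> BQueue a
BQ lenf fs lenr rs |> x = check lenf fs (lenr + 1) (x : rs)

tail :: BQueue a -> BQueue a
tail (BQ _ [] _ _)            = error "empty queue"
tail (BQ lenf (_:fs) lenr rs) = check (lenf - 1) fs lenr rs

-- enqueue 1, 2, 3, then read the queue back in FIFO order
demo :: [Int]
demo = let q = empty |> 1 |> 2 |> 3
       in [head q, head (tail q), head (tail (tail q))]
-- demo == [1,2,3]
```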
And why is this amortised O(1) even when used persistently? Haskell is lazy, so `reverse rs` does not actually happen until it is needed, and forcing it takes |fs| steps (walking `fs` in `fs ++ reverse rs`) before reaching the `reverse rs`. If we repeat `tail` on the same queue until we reach the suspension `reverse rs`, the result is memoised, so the second time it takes only O(1). On the other hand, if we reuse a version of the queue saved from before the suspension `fs ++ reverse rs` was placed, then it again has to go through |fs| steps before reaching `reverse rs`. A formal proof using the (modified) banker's method is in Okasaki's book.
The answer by @Apocalisp,

> When the dequeue list is empty, refill it by reversing the enqueue list

is the implementation in Ch 5 of his book, which comes with a warning at the very beginning:

> Unfortunately, the simple view of amortization presented in this chapter breaks in the presence of persistence
Okasaki described his amortised O(1) persistent queue in Ch 6.
So far, we have been talking about amortised time complexity only. It is actually possible to eliminate the amortisation completely and achieve worst-case O(1) time for a persistent queue. The trick is that the `reverse` has to be forced incrementally, one step at a time, on every enqueue and dequeue. The actual implementation is a bit hard to explain here, though.
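For a flavour of it, here is a hedged sketch of the real-time queue along the lines of Okasaki's Ch 7 construction (names and layout are my own; see the paper or book for the real thing). The invariant is |schedule| = |front| − |rear|, so the schedule runs out exactly when the rear gets one element too long, and `rotate f r a`, which equals `f ++ reverse r ++ a`, is then set up lazily so that each subsequent operation forces one more cons of it:

```haskell
-- front list, rear list, and the "schedule": the yet-unforced tail of front
data RTQueue a = RTQ [a] [a] [a]

emptyRT :: RTQueue a
emptyRT = RTQ [] [] []

-- rotate f r a == f ++ reverse r ++ a, producing one cons per step
-- (only called when |r| = |f| + 1, hence the final case is unreachable)
rotate :: [a] -> [a] -> [a] -> [a]
rotate []     (y:_)  a = y : a
rotate (x:xs) (y:ys) a = x : rotate xs ys (y : a)
rotate _      []     _ = error "rotate: invariant |r| = |f| + 1 violated"

-- force one step of the schedule; when exhausted, start a new rotation
exec :: [a] -> [a] -> [a] -> RTQueue a
exec f r (_:s) = RTQ f r s
exec f r []    = let f' = rotate f r [] in RTQ f' [] f'

snocRT :: RTQueue a -> a -> RTQueue a
snocRT (RTQ f r s) x = exec f (x : r) s

unconsRT :: RTQueue a -> Maybe (a, RTQueue a)
unconsRT (RTQ [] _ _)    = Nothing
unconsRT (RTQ (x:f) r s) = Just (x, exec f r s)
```

Because every operation pays for exactly one step of the pending rotation, no single enqueue or dequeue ever triggers a full O(n) `reverse`, which is what turns the amortised bound into a worst-case one.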
Again, everything is in his paper already.