How does ArrowLoop work? Also, mfix?

I'm fairly comfortable now with the rest of the arrow machinery, but I don't get how loop works. It seems magical to me, and that's bad for my understanding. I also have trouble understanding mfix. When I look at a piece of code that uses rec in a proc or do block, I get confused. With regular monadic or arrow code, I can step through the computation and keep an operational picture of what's going on in my head. When I get to rec, I don't know what picture to keep! I get stuck, and I can't reason about such code.

The example I'm trying to grok is from Ross Paterson's paper on arrows, the one about circuits.

counter :: ArrowCircuit a => a Bool Int
counter = proc reset -> do
        rec     output <- returnA -< if reset then 0 else next
                next <- delay 0 -< output+1
        returnA -< output

I assume that if I understand this example, I'll be able to understand loop in general, and it'll go a great way towards understanding mfix. They feel essentially the same to me, but perhaps there is a subtlety I'm missing? Anyway, what I would really prize is an operational picture of such pieces of code, so I can step through them in my head like 'regular' code.

Edit: Thanks to Pigworker's answer, I have started thinking about rec and friends as demands being fulfilled. Taking the counter example, the first line of the rec block demands a value called output. I imagine this operationally as creating a box, labelling it output, and asking the rec block to fill that box. In order to fill that box, we feed a value into returnA, but that value itself demands another value, called next. To obtain next, we must demand it of another line in the rec block, though for now it doesn't matter which line that is.

So we go to the next line, and we find the box labelled next, and we demand that another computation fill it. Now, this computation demands our first box! So we give it the box, but it has no value inside it, so if this computation demands the contents of output, we hit an infinite loop. Fortunately, delay takes the box, but produces a value without looking inside the box. This fills next, which then allows us to fill output. Now that output is filled, when the next input of this circuit is processed, the previous output box will have its value, ready to be demanded in order to produce the next next, and thus the next output.
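
To check that picture against mfix, here is a tiny sketch of my own (not from the paper): like delay, the function hands back a cons cell without ever looking inside the box it is given, so the knot ties itself. (A rec line in a do block under RecursiveDo desugars to essentially this mfix call.)

import Control.Monad.Fix (mfix)

-- The counter with the reset line dropped: output = 0 : map (+1) output.
-- The lambda returns a list without forcing its argument first,
-- just as delay produces a value without opening the box.
counterM :: IO [Int]
counterM = mfix (\output -> return (0 : map (+1) output))

-- take 5 <$> counterM  ==>  [0,1,2,3,4]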

How does that sound?

asked Aug 08 '11 by danharaj

1 Answer

In this code, the key piece is the delay 0 arrow in the rec block. To see how it works, it helps to think of values as varying over time and time as chopped into slices. I think of the slices as ‘days’. The rec block explains how each day's computation works. It's organised by value, rather than by causal order, but we can still track causality if we're careful. Crucially, we must make sure (without any help from the types) that each day's work relies on the past but not the future. The one-day delay 0 buys us time in that respect: it shifts its input signal one day later, taking care of the first day by giving the value 0. The delay's input signal is ‘tomorrow's next’.

rec     output <- returnA -< if reset then 0 else next
        next <- delay 0 -< output+1

So, looking at the arrows and their outputs, we're delivering today's output but tomorrow's next. Looking at the inputs, we're relying on today's reset and next values. It's clear that we can deliver those outputs from those inputs without time travel. The output is today's next number unless we reset to 0; tomorrow, the next number is the successor of today's output. Today's next value thus comes from yesterday, unless there was no yesterday, in which case it's 0.
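
Here is a day-by-day trace for one possible reset signal (my own worked example, not from the original text):

reset  :  False   False   False   True    False
next   :  0       1       2       3       1        -- yesterday's output + 1; 0 on the first day
output :  0       1       2       0       1        -- today's next, unless we reset to 0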

At a lower level, this whole setup works because of Haskell's laziness. Haskell computes by a demand-driven strategy, so if there is a sequential order of tasks which respects causality, Haskell will find it. Here, the delay establishes such an order.
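
To watch the demand-driven strategy find that order, here is a small, self-contained stream-function model you can load and run. The SF type and its instances are a minimal sketch of mine in the style of the stream functions from the arrows literature, not code from the question, and ArrowCircuit is declared locally rather than imported from the arrows package.

{-# LANGUAGE Arrows #-}

import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- A stream function: one input per 'day' in, one output per 'day' out.
newtype SF a b = SF { runSF :: [a] -> [b] }

instance Category SF where
  id            = SF id
  SF g . SF f   = SF (g . f)

instance Arrow SF where
  arr f         = SF (map f)
  first (SF f)  = SF (\xys -> let (xs, ys) = unzip xys
                              in  zip (f xs) ys)

instance ArrowLoop SF where
  loop (SF f) = SF (\as ->
      let (bs, cs) = unzip (f (zip as (lazily cs))) in bs)
    where
      -- Rebuild the feedback stream with irrefutable patterns, so zip can
      -- hand out a pair before the fed-back value has been computed.
      lazily ~(x:xs) = x : lazily xs

-- Declared locally here; the real class lives in the arrows package.
class ArrowLoop a => ArrowCircuit a where
  delay :: b -> a b b

instance ArrowCircuit SF where
  -- Shift the input one day later; the first day gets the seed value.
  delay x = SF (x:)

counter :: ArrowCircuit a => a Bool Int
counter = proc reset -> do
        rec     output <- returnA -< if reset then 0 else next
                next <- delay 0 -< output+1
        returnA -< output

-- runSF counter [False, False, False, True, False]  should give  [0, 1, 2, 0, 1]

Nothing here is clever: delay is literally (:) on the input stream, and loop ties the knot with a lazy unzip, which is exactly the sequential order that respects causality.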

Be aware, though, that Haskell's type system gives you very little help in ensuring that such an order exists. You're free to use loops for utter nonsense! So your question is far from trivial. Each time you read or write such a program, you do need to think ‘how can this possibly work?’. You need to check that delay (or similar) is used appropriately to ensure that information is demanded only when it can be computed. Note that constructors, especially (:), can act like delays, too: it's not unusual to compute the tail of a list, apparently given the whole list (but being careful only to inspect the head). Unlike imperative programming, the lazy functional style allows you to organise your code around concepts other than the sequence of events, but it's a freedom that demands a more subtle awareness of time.
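
For instance (an illustration of mine, not from the original answer), the well-known knot-tied Fibonacci list defines its later elements in terms of the whole list, with (:) playing the role of delay, so each element only ever depends on earlier ones:

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)   -- [0,1,1,2,3,5,8,...]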

answered by pigworker