 

How is FRP handled in terms of memory?

Reading about FRP (Functional Reactive Programming), I'm amazed at how intuitive and logical it seems compared to the standard imperative approach; one thing, however, puzzles me: how does the computer not immediately run out of memory doing it?

From what I've gathered from [here], in FRP the complete history (past, present, and future) of a value's changes is first class. That notion immediately rings an alarm bell in my head: it has to eat up memory very fast if it's used in an environment where a value's past isn't cleared from memory immediately.

Reading about [Fran], I've noticed that several of the examples have recursively defined functions with no termination condition. If a function never terminates and never returns a value to its caller, how is it ever going to get anything done? And for that matter, why doesn't it blow the stack after a while? Even a lazy language like Haskell runs into stack overflows at some point.

An explanation of these things would be greatly appreciated, as it completely baffles me.

Electric Coffee asked Dec 29 '14

2 Answers

The fact that this can work for simple cases should not be much of a surprise: we already comfortably use infinite data structures in Haskell thanks to laziness and garbage collection. As long as your final result does not depend on having all your values at once, they can be collected as you go along or not forced in the first place.

This is why this classic Fibonacci example runs in constant¹ space: previous entries in the list are not needed once the next two have been calculated, so they are collected as you go along, as long as you do not hold any other pointers to the list.

fib :: Int -> Integer
fib n = fibs !! n
  where
    -- Infinite Fibonacci stream; zipWith only ever needs a pointer two
    -- cells behind the front, so a constant number of cells stays live.
    fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

Try running this function for different inputs and looking at memory usage. (Run it with +RTS -s.)

(If you want a more detailed explanation with diagrams, take a look at this post I wrote.)

The point is, even if an unbounded amount of information is available to the programmer, we can still garbage collect most of it if nothing else depends on it.

Exactly the same logic can be used to implement FRP programs efficiently.
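To sketch that claim (hypothetical names, not any particular library's API): a behavior can be modeled as an infinite stream of samples, and a running fold over it keeps only its accumulator alive, so consumed samples become garbage immediately.

```haskell
-- Hypothetical sketch: a "behavior" as an infinite stream of samples.
-- The fold carries only a strictly evaluated accumulator; because the
-- stream is built locally and consumed once, each sample can be
-- collected as soon as the fold moves past it.
runningSum :: Int -> Double
runningSum n = go 0 n (iterate (* 0.5) 1.0)  -- an unbounded "history"
  where
    go acc 0 _        = acc
    go acc _ []       = acc  -- unreachable: the stream is infinite
    go acc k (x : xs) = let acc' = acc + x
                        in acc' `seq` go acc' (k - 1) xs
```

Evaluating `runningSum` for a large `n` takes time linear in `n` but constant heap, for the same reason the Fibonacci example above does.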

Of course, not everything is that easy. In the fib example, memory usage would go way up if we kept an active pointer to the beginning of the fibs list. The same thing happens in FRP if a computation depends on too much past data: it's called a time leak.
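As a concrete sketch of how such a live pointer arises in Haskell: if the list is a top-level binding (a CAF) instead of a local definition, the runtime keeps its head reachable for the whole program run, so every cell ever forced is retained.

```haskell
-- Leaky variant of the Fibonacci example: as a top-level constant, the
-- head of fibsLeaky stays reachable forever, so memory grows with the
-- largest index ever demanded instead of staying constant.
fibsLeaky :: [Integer]
fibsLeaky = 0 : 1 : zipWith (+) fibsLeaky (drop 1 fibsLeaky)

fibLeaky :: Int -> Integer
fibLeaky n = fibsLeaky !! n
```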

Dealing with time leaks is one of the open problems in implementing an efficient, well-behaved FRP framework. It's difficult to provide expressive FRP abstractions without allowing the possibility of poor or even catastrophic memory usage. I believe most current approaches end up providing abstract FRP types along with a blessed set of operations that are less likely to cause these sorts of leaks; a particularly extreme form of this is arrowized FRP, which does not provide a behavior/signal type at all but instead expresses everything as transformations between signals (as arrows).
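To make the arrowized idea concrete, here is a minimal, simplified discrete-time sketch of a signal-function type in the spirit of Yampa (the names and the stepwise representation are illustrative, not Yampa's actual API):

```haskell
-- A signal function maps one input sample to one output sample plus its
-- own continuation. Programs compose these transformers; no first-class
-- signal (and hence no signal history) is ever stored.
newtype SF a b = SF { stepSF :: a -> (b, SF a b) }

-- A running sum: state lives in the continuation's accumulator, so past
-- inputs can be collected as soon as each step completes.
sumSF :: Int -> SF Int Int
sumSF acc = SF $ \x -> let acc' = acc + x in (acc', sumSF acc')

-- Drive a signal function over a finite list of input samples.
runSF :: SF a b -> [a] -> [b]
runSF _      []       = []
runSF (SF f) (x : xs) = let (y, sf') = f x in y : runSF sf' xs
```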

I've never tried to implement a nice FRP system myself, so I can't really explain the problems in any more detail. If you're interested in more details on this topic, a great place to look is Conal Elliott's blog—with this post as a good starting point. You can also take a look at some of the papers he's written like "Push-Pull Functional Reactive Programming" as well as other papers on the subject, including some about Arrowized FRP like "Functional Reactive Programming, Continued" (chosen almost at random).

footnotes

¹ It's not really constant space because the intermediate results get bigger themselves. But it should maintain a constant number of list cells in memory.

Tikhon Jelvis answered Oct 05 '22


About the time leaks part of your question: this is indeed one of the main challenges in implementing FRP. However, FRP researchers and implementers have found several ways to avoid them.

It all depends on the precise API you offer for signals. The main question is whether or not you provide higher-order FRP. This often takes the form of a "monadic join" primitive for signals: a way to convert a signal of signals into a signal, or in other words an API to produce a signal that dynamically switches between a number of other signals. Such an API is very powerful, but can introduce the potential for time leaks, i.e. the problem that you ask about: the need to keep all of a signal's previous values in memory. However, as Heinrich Apfelmus mentions in a comment to a previous answer, there are ways to solve this by restricting higher-order APIs in certain ways, using the type system or otherwise. See that comment for links to further explanations.
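For illustration, this is roughly what such a higher-order primitive looks like in the classic semantic model of FRP, where a signal is a function of time (hypothetical types; real libraries use more operational representations):

```haskell
-- Semantic sketch: a signal is a time-indexed value.
type Time = Double
newtype Signal a = Signal { at :: Time -> a }

-- Monadic join: sample the outer signal to learn which inner signal is
-- current, then sample that inner signal. An implementation supporting
-- this for arbitrary inner signals may be forced to remember a signal's
-- entire past, which is exactly the time-leak risk described above.
joinSignal :: Signal (Signal a) -> Signal a
joinSignal ss = Signal $ \t -> at (at ss t) t
```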

Many FRP libraries simply do not offer a higher-order API and thus (quite easily) avoid the problem of time leaks. Elm is one example, as explained here under "Signals are not monads in Elm". This comes at a cost in expressiveness, since no powerful monadic API is offered, but not everyone believes you need the full power of such an API in an FRP framework/library.

Finally, I recommend an interesting presentation by Elm's main author Evan Czaplicki who does a very good job of explaining these problems and providing an overview of possible ways to solve them. He categorizes FRP approaches according to how they solve them.

Dominique Devriese answered Oct 05 '22