I found this statement while studying Functional Reactive Programming, in "Plugging a Space Leak with an Arrow" by Hai Liu and Paul Hudak (page 5):
Suppose we wish to define a function that repeats its argument indefinitely:

repeat x = x : repeat x

or, in lambdas:

repeat = λx → x : repeat x

This requires O(n) space. But we can achieve O(1) space by writing instead:

repeat = λx → let xs = x : xs in xs
The difference here seems small, but it hugely improves the space efficiency. Why and how does this happen? The best guess I've made is to evaluate them by hand:
r = \x -> x : r x

r 3
-> 3 : r 3
-> 3 : 3 : r 3
-> 3 : 3 : 3 : ...
-> [3,3,3,...]
As above, we keep creating new thunks as the recursion unfolds. Then I try to evaluate the second one:
r = \x -> let xs = x : xs in xs

r 3
-> let xs = 3 : xs in xs
-> xs, where xs = 3 : xs
-> 3 : xs, where xs = 3 : xs
-> 3 : 3 : xs, where xs = 3 : xs   (the tail is still the same single xs cell)
-> ...
In the second form, xs appears once and can be shared between every place it occurs, so I guess that's why we require only O(1) space rather than O(n). But I'm not sure whether I'm right or not.
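To check my guess, I tried a small experiment (the program below is my own sketch, not from the paper; the names repeatChain and repeatKnot are made up). Compiling without optimizations, so GHC cannot rewrite the definitions itself, and running with the RTS statistics flag shows a large maximum residency for the first version and a small constant one for the second:

-- SpaceLeak.hs
-- Build: ghc -O0 SpaceLeak.hs
-- Run:   ./SpaceLeak 1 +RTS -s    (chain version)
--        ./SpaceLeak 2 +RTS -s    (knotted version)
import System.Environment (getArgs)

repeatChain :: a -> [a]
repeatChain x = x : repeatChain x        -- the O(n) definition

repeatKnot :: a -> [a]
repeatKnot x = let xs = x : xs in xs     -- the O(1) definition

main :: IO ()
main = do
  [v] <- getArgs
  let n = 1000000 :: Int
      ys = if v == "1" then repeatChain (0 :: Int) else repeatKnot 0
  print (ys !! n)   -- walk a long prefix of the list
  print (head ys)   -- use ys again, so the prefix cannot be collected mid-walk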
BTW: the term "shared" comes from page 4 of the same paper:
The problem here is that the standard call-by-need evaluation rules are unable to recognize that the function:
f = λdt → integralC (1 + dt) (f dt)
is the same as:
f = λdt → let x = integralC (1 + dt) x in x
The former definition causes work to be repeated in the recursive call to f, whereas in the latter case the computation is shared.
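The same distinction shows up in a more familiar setting (my own illustration, not the paper's): naming a value lets call-by-need share it, while re-applying a function repeats the work.

-- Exponential time: each recursive call re-applies fibSlow, so the
-- smaller results are recomputed over and over.
fibSlow :: Int -> Integer
fibSlow n
  | n < 2     = fromIntegral n
  | otherwise = fibSlow (n - 1) + fibSlow (n - 2)

-- Linear time: the list names itself, so call-by-need computes each
-- element once and shares it, like the `let x = integralC (1 + dt) x` form.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fibFast :: Int -> Integer
fibFast n = fibs !! n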
It's easiest to understand with pictures:
The first version
repeat x = x : repeat x
creates a chain of (:) constructors ending in a thunk, which will replace itself with more constructors as you demand them. Thus, O(n) space.
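In ASCII, after three elements have been demanded the heap looks roughly like this (one fresh cons cell per element, with a thunk still waiting at the end):

(:) ---> (:) ---> (:) ---> <thunk: repeat x>
 |        |        |
 x        x        x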
The second version
repeat x = let xs = x : xs in xs
uses let to "tie the knot", creating a single (:) constructor which refers to itself.
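In ASCII, the knotted version is one cons cell whose tail pointer loops back to the cell itself, so no further allocation ever happens:

     +--------+
     v        |
xs: (:) ------+
     |
     x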
Put simply, variables are shared, but function applications are not. In
repeat x = x : repeat x
it is a coincidence (from the language's perspective) that the (co)recursive call to repeat is with the same argument. So, without additional optimization (known as the static argument transformation, sketched below), the function will be called again and again.
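For illustration, here is roughly what that transformation would turn the first definition into (my sketch; GHC exposes it as the flag -fstatic-argument-transformation):

repeat :: a -> [a]
repeat x = go
  where
    -- x no longer varies between recursive calls, so the recursion
    -- needs no argument; go is a plain value, and binding it by name
    -- ties the same knot as the let version.
    go = x : go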
But when you write
repeat x = let xs = x : xs in xs
there are no recursive function calls. You take an x and construct a cyclic value xs using it. All sharing is explicit.
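That explicit-sharing pattern is exactly what Data.Function.fix packages up: in base it is defined as fix f = let x = f x in x, so the following (hypothetically named) definition ties the same knot:

import Data.Function (fix)

repeat' :: a -> [a]
repeat' x = fix (x :)   -- one shared cons cell whose tail is itself

For example, take 5 (repeat' 'a') yields "aaaaa" while the whole list is still a single cell on the heap.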
If you want to understand it more formally, you need to familiarize yourself with the semantics of lazy evaluation, such as Launchbury's A Natural Semantics for Lazy Evaluation.