 

Efficient recursion in functional programming vs. inefficient recursion in different paradigms

As far as I know, recursion is very elegant but inefficient in OOP and procedural programming (see the wonderful "Higher-Order Perl" by Mark Jason Dominus). I have heard that in functional programming recursion is fast, keeping its elegance and simplicity. Could someone confirm and possibly expand on this? I am thinking in terms of XSLT and Haskell (high on my next-language-to-learn list).

Thanks

Daniel

Daniel Avatar asked Feb 26 '10 15:02



People also ask

Why are recursive functions inefficient?

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small.
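The "switch to a different algorithm when the input becomes small" idea can be sketched in Haskell. This is an illustrative example, not from the original answer; the function name `hybridSort` and the threshold of 10 are arbitrary choices:

```haskell
import Data.List (insert, partition)

-- A recursive (quicksort-style) sort that falls back to insertion sort
-- once the input is small, avoiding recursion overhead on tiny lists.
hybridSort :: Ord a => [a] -> [a]
hybridSort xs
  | length xs <= 10 = foldr insert [] xs   -- insertion sort for small inputs
  | otherwise       = hybridSort smaller ++ [p] ++ hybridSort larger
  where
    (p:rest)          = xs                 -- safe: the list has > 10 elements here
    (smaller, larger) = partition (< p) rest
```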

Why are recursive functions generally inefficient compared to iterative functions?

Recursion has a large amount of overhead compared to iteration. It is usually much slower because all function calls must be stored on a stack to allow the return back to the caller functions.

Why are recursive solutions sometimes not as efficient as iterative solutions?

Recursion can be slower than iteration because, in addition to processing the loop content, it has to deal with the recursive call stack frame, which will mean more code is run, which means it will be slower.
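The call-stack cost described above is exactly what tail recursion avoids. As a minimal illustration (not from the original answer; the names `sumNaive` and `sumAcc` are mine), compare a naive recursive sum, which builds a chain of pending additions, with an accumulator-based tail-recursive version that a compiler can turn into a loop:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Naive: each call leaves a pending (+), so work accumulates on the stack.
sumNaive :: Int -> Int
sumNaive 0 = 0
sumNaive n = n + sumNaive (n - 1)

-- Tail recursive: the recursive call is the last thing done, so GHC
-- compiles it to a loop; the bang pattern keeps the accumulator strict.
sumAcc :: Int -> Int
sumAcc = go 0
  where
    go !acc 0 = acc
    go !acc n = go (acc + n) (n - 1)
```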

What is the difference between function chaining and recursion?

Method chaining is also known as the named parameter idiom. Use method chaining only when it is actually helpful, as with some string methods. A recursive method needs more memory than its iterative equivalent. Recursion makes the code compact but more complex to understand.


1 Answer

Tail recursion is iteration in any decent functional language implementation. Here's an example using GHC Haskell. A simple program to add a sequence of numbers. It begins as the composition of several recursive functions:

import qualified Data.Vector as U

main = print (U.sum (U.enumFromTo 1 (10000000 :: Int)))

The compiler optimizes this into a single tail-recursive function (via a source-to-source transformation):

loop x y = case y <= 10000000 of
      False -> x
      True  -> loop (x + y) (y + 1)

This recursive function is then compiled into a straightforward loop:

loop:
    .Lc216:
            cmpq $10000000,%rsi
            jle .Lc219
            movq %r14,%rbx
            movq (%rbp),%rax
            jmp *(%rax)
    .Lc219:
            addq %rsi,%r14
            incq %rsi
            jmp loop

Or with the GHC LLVM backend, additional optimizations are applied to the imperative representation of the program:

    loop:
        leaq    1(%rsi), %rax
        addq    %rsi, %r14
        cmpq    $10000001, %rax
        jge     .LBB1_5
        addq    $2, %rsi
        addq    %rax, %r14
    test:                                # %tailrecurse
        cmpq    $10000001, %rsi
        jl      loop

Note how the tail-recursive loop's label is tagged (`%tailrecurse`).

So we had a pipeline of recursive functions, which was compiled into a single tail-recursive function, which in turn was compiled into a single imperative loop using no stack: just eight instructions in the end.

And that is why both function composition and recursion are extremely efficient in good, optimizing functional languages.
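The loop shown in the assembly can also be written out by hand. This is my sketch of what stream fusion effectively produces from `U.sum (U.enumFromTo 1 n)`: a single strict, tail-recursive loop with no intermediate vector (the name `sumTo` is mine):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Equivalent of the fused loop: accumulate x while counting y up to n,
-- matching the `loop (x + y) (y + 1)` shape from the answer above.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)
```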

Don Stewart Avatar answered Sep 28 '22 19:09
