Which algorithms are hard to implement in functional languages?

I'm dabbling in functional languages and I have found some algorithms (especially those that use dynamic programming) harder to write and sometimes less efficient in worst-case runtime. Is there a class of algorithms that are less efficient in functional languages with immutable variables and thus no side effects?

And is there a reference that someone can point me to that will help with the more difficult to write algorithms (maybe those optimized by shared state)?

Thanks

GTDev asked Jun 12 '12 06:06


1 Answer

First off, as you may or may not be aware, some languages, including Haskell, implement sharing, which alleviates some of the problems you might be thinking of.
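To give a feel for what sharing buys you, here's a minimal Haskell sketch (the names `fibs` and `fib` are just illustrative): every reference to the list points at the same lazily evaluated structure, so each element is computed at most once, which is exactly the memoization that dynamic programming wants.

```haskell
-- A minimal sketch of sharing: every reference to `fibs` points at the
-- same lazily evaluated list, so each element is computed at most once.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Indexing into the shared list gives memoization "for free".
fib :: Int -> Integer
fib n = fibs !! n
```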

While Andrew's answer points at Turing completeness, it doesn't really answer the question of what algorithms are hard to implement in functional languages. Instead of asking what algorithms are hard to implement in functional languages, people typically ask what data structures are hard to implement in functional languages.

The simple answer to this: things that involve pointers.

There's no functional equivalent to pointers. When you drill down to the machine level there's a map, and you can compile certain data structures safely down to arrays or things implemented as pointers, but at a high level you just can't express things using pointer-based data structures as easily as you can in imperative languages.

To get around this, a number of things have been done:

  • Since pointers form the basis for a hash table, and since hash tables really just implement a map, efficient functional maps have been studied comprehensively. In fact, Chris Okasaki has a book ("Purely Functional Data Structures") that details many, many ways to implement functional maps, deques, etc...
  • Since pointers can be used to represent a position inside the traversal of some larger data structure, there has also been work in this area. The product is the zipper, an efficient structure that succinctly represents the functional equivalent of the "pointer inside of a deeper structure" technique (a minimal sketch follows this list).
  • Since pointers can be used to implement side-effecting computations, monads have been used to express this kind of computation in a pretty way. Because threading state through a program is hard to manage by hand, one use for monads is to mask an ugly, imperative-behaving part of your program and use the type system to make sure the program is chained together correctly (through monadic binds).
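To make the zipper bullet concrete, here's a minimal list zipper sketch in Haskell; the type and function names are my own, not from any particular library:

```haskell
-- A minimal list zipper: a "pointer" into the middle of a list,
-- represented as the (reversed) elements to the left of the focus
-- plus the elements from the focus rightward.
data Zipper a = Zipper [a] [a] deriving Show

fromList :: [a] -> Zipper a
fromList xs = Zipper [] xs

-- Move the focus one step; both moves are O(1).
right, left :: Zipper a -> Zipper a
right (Zipper ls (x:rs)) = Zipper (x:ls) rs
right z                  = z
left (Zipper (l:ls) rs)  = Zipper ls (l:rs)
left z                   = z

-- Replace the focused element without touching the rest of the list.
modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls (x:rs)) = Zipper ls (f x : rs)
modify _ z                  = z
```

Moving the focus and editing at the focus are both constant-time, which is what the imperative "pointer into the middle of a structure" idiom gives you.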

While I'd like to say that any algorithm can be translated from an imperative one to a functional one very easily, this is simply not the case. However, I'm fairly convinced that the problem isn't the algorithms per se, but the data structures they manipulate, being based on an imperative notion of state. You can find a long list of functional data structures in this post.

The flip side of all of this is that if you start using a more purely functional style, much of the complexity in your program goes down, and many of the needs for heavily imperative data structures disappear. (For example, a very common use of pointers in imperative languages is to implement nasty design patterns, which usually translate into clever uses of polymorphism and typeclasses at the functional level.)

EDIT: I believe the essence of this question deals with how to express computation in a functional manner. However, it should be noted that there are ways of defining stateful computation in a functional way. Or rather, it is possible to use functional techniques to reason about stateful computation. For example, the Ynot project does this using a parameterized monad where facts about the heap (in the form of separation logic) are tracked by the monadic binds.
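Ynot layers much more on top of this (separation logic in the types), but the basic move, expressing stateful computation purely and letting monadic bind sequence the updates, already shows up in the plain State monad from Haskell's mtl package. A minimal sketch:

```haskell
import Control.Monad (replicateM)
import Control.Monad.State (State, evalState, get, put)

-- A fresh-name counter: "mutable" state threaded purely. Each `fresh`
-- reads the counter and writes back its successor; bind does the plumbing.
fresh :: State Int Int
fresh = do
  n <- get
  put (n + 1)
  return n

-- Running the computation makes the state threading explicit.
labels :: [Int]
labels = evalState (replicateM 3 fresh) 0   -- [0, 1, 2]
```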

By the way, even in ML I don't see why dynamic programming is that hard. Dynamic programming problems, which usually build up some sequence of intermediate results to compute a final answer, can accumulate the constructed values via arguments to the function, perhaps requiring a continuation in some circumstances. With tail recursion there's no reason this can't be just as pretty and efficient as in imperative languages (a sketch follows). Now sure, you may run into the argument that if these values are lists (for example), imperatively we can implement them as arrays, but for that, see the content of the post proper :-)
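As one concrete instance of that accumulate-through-arguments style, here's a sketch (in Haskell rather than ML, and just one way to do it) of Levenshtein edit distance, a textbook dynamic program, computed row by row with plain lists instead of a mutable table:

```haskell
import Data.List (foldl')

-- Levenshtein edit distance, row by row: each new row of the DP table
-- is built from the previous one, so no mutable array is needed and the
-- driver is a strict left fold over the second string.
editDistance :: String -> String -> Int
editDistance xs ys = last (foldl' step firstRow ys)
  where
    firstRow = [0 .. length xs]
    step prev y = scanl compute (head prev + 1) (zip3 xs prev (tail prev))
      where
        compute left (x, diag, up) =
          minimum [ left + 1                          -- insertion
                  , up + 1                            -- deletion
                  , diag + if x == y then 0 else 1 ]  -- match/substitution
```

For instance, `editDistance "kitten" "sitting"` is 3. Only the previous row is ever carried forward, which is exactly the accumulation-by-argument pattern described above.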

Kristopher Micinski answered Oct 14 '22 20:10