I am reading the book "Functional Programming in Javascript".
In Chapter 2 there is the following comparison between imperative/functional code for finding the first four words containing only letters in a string:
var words = [], count = 0;
var text = myString.split(' ');
for (var i = 0; count < 4 && i < text.length; i++) {
    if (!text[i].match(/[0-9]/)) {
        words = words.concat(text[i]);
        count++;
    }
}
var words = myString.split(' ').filter(function (x) {
    return !x.match(/[0-9]/);
}).slice(0, 4);
I reasoned that for any case where the length of text is greater than four, the imperative version will be faster, since it only runs until it has found the first four words that match the criteria, while the functional version first filters the entire array and only then slices off the first four elements.
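For illustration, here is a quick sketch of my own (not from the book) that counts how often the filter callback actually runs:
var myString = "1 a b 2 c d 3 e";
var calls = 0;
var words = myString.split(' ').filter(function (x) {
    calls++; // count every invocation of the callback
    return !x.match(/[0-9]/);
}).slice(0, 4);
console.log(words); // ["a", "b", "c", "d"]
console.log(calls); // 8 -- the callback visits every word, while the loop above stops after 6 iterations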
My question is: am I right in assuming this?
Efficiency issues
Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures, like arrays, have a very straightforward implementation on present hardware.
Functional programming is often credited with advantages such as lazy evaluation, nested functions, fewer bugs, and easier parallel programming. Put simply, functional programming means building an application out of functions, each written to perform a particular task.
Functional languages will seem slower because you'll only ever see benchmarks comparing code that is easy enough to write well in C, and you'll never see benchmarks comparing meatier tasks where functional languages start to excel.
In some cases (like yours) yes, but not always. Lots of functional languages like Haskell or Scala have built-in laziness, which means expressions aren't evaluated immediately, but only when needed.
If you're familiar with Java 8, its Streams API is also lazy, which means something like this will not traverse the whole stream three times:
stream.filter(n -> n < 200)     // intermediate operations like filter are lazy:
      .filter(n -> n % 2 == 0)  // nothing is traversed here at all
      .filter(n -> n > 15);     // until a terminal operation is invoked
It's a very interesting concept, and you can check out the documentation for the Scala Stream class here: http://www.scala-lang.org/api/2.10.0/index.html#scala.collection.immutable.Stream
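In JavaScript itself you can get a similar effect with ES6 generators; the following is only a sketch of the idea (lazyFilter and lazyTake are names I made up for illustration):
function* lazyFilter(pred, iterable) {
    for (const x of iterable) {
        if (pred(x)) yield x; // produce matching elements one at a time, on demand
    }
}

function* lazyTake(n, iterable) {
    if (n <= 0) return;
    for (const x of iterable) {
        yield x;
        if (--n === 0) return; // stop pulling from the source once n items were taken
    }
}

const words = [...lazyTake(4, lazyFilter(x => !x.match(/[0-9]/), "1 a b 2 c d 3 e".split(' ')))];
console.log(words); // ["a", "b", "c", "d"] -- "3" and "e" are never even visited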
The comparison of these two code fragments makes perfect sense as part of a tutorial. Functional programming is demanding, and if the author doesn't confront his readers with the most efficient functional implementations, it is presumably to keep the examples simple.
Why is functional programming demanding? Because it follows mathematical principles (and these don't always match human intuition), and because novices are usually accustomed to the imperative style. In FP the data flow has priority, while the actual algorithms remain in the background. It takes time to get used to this style, but once you have, you'll probably never look back!
How can you implement this example more efficiently in a functional way? There are several possibilities, of which I illustrate two. Note that both implementations avoid intermediate arrays:
JavaScript is strictly evaluated. However, lazy evaluation can be emulated with thunks (nullary functions). Furthermore, foldR (fold right) is required as the iteration function from which filterN is derived:
// foldR: a lazy right fold -- the accumulator is passed as a thunk,
// so the recursion only continues when the reducing function asks for it.
const foldR = rf => acc => xs => xs.length
    ? rf(xs[0])(() => foldR(rf)(acc)(xs.slice(1)))
    : acc;

// filterN: keeps the first n elements that satisfy pred and stops folding
// as soon as the nth match has been found (note that it decrements n).
const filterN = pred => n => foldR(
    x => acc => pred(x) && --n ? [x].concat(acc()) : n ? acc() : [x]
)([]);
const alpha = x => !x.match(/[0-9]/);
let xs = ["1", "a", "b", "2", "c", "d", "3", "e"];
filterN(alpha)(4)(xs); // ["a", "b", "c", "d"]
This implementation has the disadvantage that filterN isn't pure, because it is stateful (it mutates the counter n). CPS (continuation-passing style) enables a pure variant of filterN:
// foldL: a left fold in continuation-passing style -- the reducing function
// receives a continuation and decides whether the fold keeps going.
const foldL = rf => acc => xs => xs.length
    ? rf(acc)(xs[0])(acc_ => foldL(rf)(acc_)(xs.slice(1)))
    : acc;

// filterN: collects matches in the accumulator and simply doesn't invoke
// the continuation once the nth match has been found.
const filterN = pred => n => foldL(
    acc => x => cont => pred(x)
        ? acc.length + 1 < n ? cont(acc.concat(x)) : acc.concat(x)
        : cont(acc)
)([]);
const alpha = x => !x.match(/[0-9]/);
let xs = ["1", "a", "b", "2", "c", "d", "3", "e"];
filterN(alpha)(4)(xs); // ["a", "b", "c", "d"]
It can be a bit confusing how foldR and foldL differ. The difference lies not in commutativity but in associativity. The CPS implementation still has a drawback: filterN should be separated into filter and takeN to increase code reusability.
Transducers allow composing (reducing/transforming) functions without having to rely on intermediate arrays. Consequently, we can separate filterN into two different functions, filter and takeN, and thus increase their reusability. Unfortunately, I haven't found a concise implementation of transducers that would be suitable for a comprehensible and executable example. I'll try to develop my own, simplified transducer solution and then give an appropriate example here.
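To give a rough idea of what such a solution could look like, here is a simplified sketch; the helpers filterT, takeT, transduce and the REDUCED marker are illustrative names of my own, not a standard transducer API:
// A transducer takes a reducer and returns a new reducer, so filtering and
// taking compose without building intermediate arrays between the stages.
const REDUCED = Symbol("reduced"); // marker that signals early termination

const filterT = pred => reducer => (acc, x) =>
    pred(x) ? reducer(acc, x) : acc;

const takeT = n => reducer => {
    let taken = 0; // local state, created once per transduction
    return (acc, x) => {
        acc = reducer(acc, x);
        taken += 1;
        return taken >= n ? { [REDUCED]: acc } : acc; // wrap the result to stop
    };
};

const transduce = (xform, reducer, init, xs) => {
    const rf = xform(reducer);
    let acc = init;
    for (const x of xs) {
        acc = rf(acc, x);
        if (acc && acc[REDUCED] !== undefined) return acc[REDUCED]; // stop early
    }
    return acc;
};

const alpha = x => !x.match(/[0-9]/);
const append = (acc, x) => acc.concat(x);
const xs = ["1", "a", "b", "2", "c", "d", "3", "e"];

transduce(rf => filterT(alpha)(takeT(4)(rf)), append, [], xs); // ["a", "b", "c", "d"]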
Conclusion
As you can see, these implementations may not be as efficient as the imperative solution. Bergi has already pointed out that execution speed is not the most relevant concern of functional programming. If micro-optimizations are important to you, you should continue to rely on the imperative style.