After writing this article I decided to put my money where my mouth is and started to convert a previous project of mine to use recursion-schemes.
The data structure in question is a lazy kdtree. Please have a look at the implementations with explicit and implicit recursion.
This is mostly a straightforward conversion along the lines of:

    data KDTree v a = Node a (KDTree v a) (KDTree v a) | Leaf v a

to

    data KDTreeF v a f = NodeF a f f | LeafF v a
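A minimal sketch of how the KDTreeF version is then used, assuming the constructor names above plus the derived Functor instance that recursion-schemes needs (not shown in the snippet); the real project may differ in the details:

    {-# LANGUAGE DeriveFunctor #-}

    import Data.Functor.Foldable (Fix (..), cata)

    -- base functor as above, with the Functor instance recursion-schemes relies on
    data KDTreeF v a f = NodeF a f f | LeafF v a
      deriving (Functor)

    -- the Fix-wrapped tree
    type KDTree' v a = Fix (KDTreeF v a)

    -- an example fold: count the leaves with cata
    leafCount :: KDTree' v a -> Int
    leafCount = cata alg
      where
        alg (LeafF _ _)   = 1
        alg (NodeF _ l r) = l + r

Any other fold over the tree follows the same pattern: write an algebra for a single layer and hand it to cata.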
Now, after benchmarking the whole shebang, I find that the KDTreeF version is about two times slower than the normal version (find the whole run here). Is it just the additional Fix wrapper that slows me down here? And is there anything I could do about this?
I also use

    cata (fmap foo algebra)

several times. Is this good practice? I am using the recursion-schemes package. Is this related: https://ghc.haskell.org/trac/ghc/wiki/NewtypeWrappers? Is

    newtype Fix f = Fix (f (Fix f))

not "free"?
I just did another batch of benchmarks, this time testing tree construction and deconstruction. Benchmark here: https://dl.dropboxusercontent.com/u/2359191/2014-05-15-kdtree-bench-03.html
The Core output indicates that the intermediate data structures are not removed completely, and it is not surprising that the linear searches dominate now; still, the KDTreeF version is now slightly faster than the KDTree version. It doesn't matter much, though.
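To make "construction and deconstruction" concrete, here is a generic sketch in the Fix-based setup, reusing KDTree' and the constructors from the sketch above. The splitting step is deliberately left as a parameter; it is not the kd-tree splitting logic that was actually benchmarked:

    import Data.Functor.Foldable (ana, cata)

    -- construction as an unfold: the caller supplies a splitting step that
    -- either stops at a leaf or yields a node payload and two sub-collections
    buildWith :: ([(v, a)] -> Either (v, a) (a, [(v, a)], [(v, a)]))
              -> [(v, a)] -> KDTree' v a
    buildWith split = ana coalg
      where
        coalg ps = case split ps of
          Left (v, x)       -> LeafF v x
          Right (p, ls, rs) -> NodeF p ls rs

    -- deconstruction as a fold: flatten all leaf payloads back into a list
    toList :: KDTree' v a -> [(v, a)]
    toList = cata alg
      where
        alg (LeafF v x)   = [(v, x)]
        alg (NodeF _ l r) = l ++ r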
I have just implemented the Thing + ThingF + Base instance variant of the tree. And guess what ... this one is amazingly fast. I was under the impression that this one would be the slowest of all variants. I really should have read my own post ... the line where I write:

    there is no trace of the TreeF structure to be found

Let the numbers speak for themselves; kdtreeu is the new variant. The results are not always as clear as for these cases, but in most cases they are at least as fast as the explicit recursion (kdtree in the benchmark).
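For reference, here is a sketch of what this Thing + ThingF + Base instance variant looks like, using the declarations from the top of the question. The class names below are Recursive/Corecursive as in recent recursion-schemes releases; older versions (around the time of this post) spelled them Foldable/Unfoldable but use the same project/embed methods:

    {-# LANGUAGE TypeFamilies #-}

    import Data.Functor.Foldable

    -- tie the plain tree to its base functor
    type instance Base (KDTree v a) = KDTreeF v a

    instance Recursive (KDTree v a) where
      project (Node x l r) = NodeF x l r
      project (Leaf v x)   = LeafF v x

    instance Corecursive (KDTree v a) where
      embed (NodeF x l r) = Node x l r
      embed (LeafF v x)   = Leaf v x

    -- the same folds now run directly on KDTree, with no Fix wrapper involved
    leafCount' :: KDTree v a -> Int
    leafCount' = cata alg
      where
        alg (LeafF _ _)   = 1
        alg (NodeF _ l r) = l + r

With this setup a KDTreeF value only exists transiently inside each step of a fold, which fits the quoted observation that no trace of the TreeF structure remains.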
I wasn't using recursion-schemes, but rather my own "hand-rolled" cata, ana, and Fix/unFix to generate (lists of) and evaluate programs in a small language, in the hope of finding one that matched a list of (input, output) pairs.
In my experience, cata optimized better than direct recursion and gave a speed boost. Also IME, ana prevented the stack overflow errors that my naive generator was causing, but that may have centered around generation of the final list.
So my answer would be: no, they aren't always slower. I don't see any obvious problems, though, so they may simply be slower in your case. It's also possible that recursion-schemes itself is just not optimized for speed.
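For concreteness, the hand-rolled machinery described above is essentially the following (these are the standard definitions, not necessarily the exact code used here):

    newtype Fix f = Fix { unFix :: f (Fix f) }

    -- fold: tear a structure down one layer at a time
    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg = alg . fmap (cata alg) . unFix

    -- unfold: build a structure up one layer at a time
    ana :: Functor f => (a -> f a) -> a -> Fix f
    ana coalg = Fix . fmap (ana coalg) . coalg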