I see this as the general form of a factorial function in Haskell:
factorial :: (Integral a) => a -> a
factorial n = product [1..n]
I understand this is the most elegant way, but when I write my own recursive function to do it, it's significantly slower:
factorial :: (Integral a) => a -> a
factorial 1 = 1
factorial n = n * factorial (n - 1)
Doesn't the first solution have to do pretty much everything that the second one does, internally? How is it so much faster? Is it possible to write something as fast as the first solution without using the fancy list notation or the product function?
The first version is easier for GHC to optimize than the second one. In particular, product uses foldl:

product = foldl (*) 1

and when applied to [1..n] (which is just 1 `enumFromTo` n) it is subject to fusion. In short, GHC has carefully crafted rewrite rules that are meant to optimize away intermediate data structures from pieces of code where the lists created are immediately consumed (in the case of factorial, foldl (*) 1 is the consumer and 1 `enumFromTo` n the producer).
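To see what fusion buys you, the fused program behaves roughly like a direct loop that never allocates the list. Here is an illustrative sketch of that effect (the name factorialFused is mine, and this is not GHC's literal output):

-- Roughly the loop left behind after fusion: no intermediate list is built.
-- Illustrative sketch only, not GHC's actual Core.
factorialFused :: (Integral a) => a -> a
factorialFused n = loop 1 1
  where
    loop i acc
      | i > n     = acc
      | otherwise = loop (i + 1) (acc * i)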
Note that you can do what GHC does (factorial = foldl (*) 1 . enumFromTo 1) and get the same performance.
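For example, a quick sanity check, assuming a GHCi session:

ghci> let factorial = foldl (*) 1 . enumFromTo 1
ghci> factorial 10 :: Integer
3628800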
Also, your second function isn't even tail recursive. That part you could fix pretty easily by passing in an accumulator:
factorial :: (Integral a) => a -> a
factorial n = go n 1
  where
    go 0 m = m
    go n m = go (n - 1) (n * m)
Hand in hand with this is the fact that, for most numeric types, you will want the arithmetic to be strict. That boils down to adding bang patterns (the BangPatterns extension) to n and m.
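For example, a minimal sketch of the accumulator version with strict arguments, assuming the BangPatterns extension is enabled:

{-# LANGUAGE BangPatterns #-}

factorial :: (Integral a) => a -> a
factorial n = go n 1
  where
    -- matching on the literal 0 already forces the counter;
    -- the bang on m keeps the accumulator evaluated at every step
    go 0 m  = m
    go k !m = go (k - 1) (k * m)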
Maybe something like this:

f n = foldl (*) 1 [1..n]

You can change foldl to foldr or foldl'; it will change the speed.
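For instance, the foldl' variant (foldl' lives in Data.List and forces the accumulator at each step; the name f' is just for illustration):

import Data.List (foldl')

-- Same left fold, but the accumulator is forced at each step
-- instead of building up a chain of thunks.
f' :: (Integral a) => a -> a
f' n = foldl' (*) 1 [1..n]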