Here's a simple definition that computes the Fibonacci numbers:
fib :: [Int]
fib = 1 : 1 : zipWith (+) fib (tail fib)
In ghci I can compute the series quickly. In fact, a bit of experimentation reveals that the computation runs in approximately linear time.
ghci> last $ take 100000 fib
354224848179261915075 -- takes under a second
If I change the type signature to be polymorphic instead:
fib :: Num a => [a]
fib = 1 : 1 : zipWith (+) fib (tail fib)
Then the algorithm becomes slower. In fact, it seems that it now runs in exponential time!
Does switching to a polymorphic type signature mean that the list is being completely recomputed at each stage? If so, why?
This is because of the dictionary translation.
fib :: [Int]
is a top-level constant: it is evaluated once and the result is shared by every use.
But this:
fib :: Num a => [a]
fib = 1 : 1 : zipWith (+) fib (tail fib)
is really a top-level function, which will be applied to a Num
dictionary at runtime, roughly like so:
fib d = 1 : 1 : zipWith (d.+) (fib d) (tail (fib d))
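To make the shape of that translation concrete, here is a version with the dictionary written out by hand as an ordinary record (a sketch only: NumDict, plus, one, fibD and intDict are made-up names for illustration, not GHC's actual internal representation):

data NumDict a = NumDict { plus :: a -> a -> a, one :: a }

-- The polymorphic fib with the dictionary made explicit: every recursive
-- call rebuilds the list from scratch, so nothing is shared between calls.
fibD :: NumDict a -> [a]
fibD d = one d : one d : zipWith (plus d) (fibD d) (tail (fibD d))

-- An explicit dictionary for Int.
intDict :: NumDict Int
intDict = NumDict { plus = (+), one = 1 }

Evaluating, say, last (take 30 (fibD intDict)) already shows the blow-up, because each element forces two fresh recursive calls to fibD.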
So if you compile the polymorphic version without optimizations, so that no specialization can happen, you end up with an exponential-time fib: the list is rebuilt from scratch on every recursive call, because it is no longer a single shared lazy data structure.
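One common workaround (a sketch, not part of the original answer) is to tie the knot on a locally bound list, so that within a single traversal the list is shared even though the dictionary is still passed in:

fibShared :: Num a => [a]
fibShared = xs
  where
    -- xs is one circular lazy list per dictionary application, so a single
    -- traversal such as last (take 100000 fibShared) runs in linear time.
    xs = 1 : 1 : zipWith (+) xs (tail xs)

Alternatively, compiling with optimizations (and, if needed, a {-# SPECIALIZE fib :: [Int] #-} pragma) can let GHC generate a monomorphic copy of fib, which is once again a shared top-level constant.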