Given a function of type f :: a -> a, we can produce a function that applies f n times:
nTimes :: Int -> (a -> a) -> (a -> a)
nTimes 0 _ = id
nTimes 1 f = f
nTimes n f = f . nTimes (n-1) f
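For example, in GHCi (assuming the definition above is loaded):

ghci> nTimes 3 (*2) 1
8
ghci> nTimes 0 (*2) 1
1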
I can use the exponentiation-by-squaring method here to implement another nTimes function:
nTimes' :: Int -> (a -> a) -> (a -> a)
nTimes' = nTimes'' id
  where
    nTimes'' acc n f
      | n == 0 = acc
      | even n = nTimes'' acc (n `div` 2) (f . f)
      | otherwise = nTimes'' (acc . f) (n-1) f
My question is: will nTimes and nTimes' always produce the same result? Will nTimes' be faster?

Although they are equivalent, I would be extremely surprised if nTimes' were actually faster or memory-saving in any real situation. The problem is that unlike with the x * x doubling in ordinary exponentiation by squaring, f . f does not actually share any of the real work done when applying f. It is still going to turn in the end into applying the outermost single f to an argument constructed by all the remainder somehow. And nTimes (n-1) f x is going to be about the most compact representation you can have of that remainder until it is itself actually needed to be evaluated, which will require applying its leftmost f to a representation of nTimes (n-2) f x, etc.
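To make this concrete, here is a small sketch (using Debug.Trace purely for illustration, and assuming nTimes and nTimes' from the question are in scope) showing that both versions end up applying f the same number of times once the result is forced:

import Debug.Trace (trace)

-- Each application of noisy prints a line, so we can see how often
-- the underlying function is really applied.
noisy :: Int -> Int
noisy x = trace "applying f" (x + 1)

main :: IO ()
main = do
  print (nTimes  5 noisy 0)   -- prints "applying f" five times, then 5
  print (nTimes' 5 noisy 0)   -- also prints "applying f" five times, then 5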
EDIT: Let me add that this could change significantly if you were doing memoization, i.e. replacing f . f by memo (f . f) for some memo-combinator that modifies a function to remember its results. In that case actual work could be shared, and this version of nTimes' might sometimes be an improvement. Other times it could waste an awful lot of memory, though.
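As an illustration only (memo below is a hypothetical, very naive combinator for non-negative Int arguments, not a real library function), the idea would look something like this:

-- A naive memo combinator: results for non-negative Int arguments are
-- cached in a lazily built list. Lookup is O(n), so this only sketches
-- the idea; it is not something you would use for performance.
memo :: (Int -> a) -> (Int -> a)
memo f = (table !!)
  where table = map f [0..]

-- nTimes' with each squared function memoized (same structure as above,
-- specialized to Int so the memo table has a usable key):
nTimesMemo :: Int -> (Int -> Int) -> (Int -> Int)
nTimesMemo = go id
  where
    go acc 0 _ = acc
    go acc n f
      | even n    = go acc (n `div` 2) (memo (f . f))
      | otherwise = go (acc . f) (n - 1) f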
It will produce the same result in both cases, because both * and . are associative operators.
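For instance, a quick (non-exhaustive) spot check of that claim, assuming the two definitions from the question are in scope:

-- Compare both versions on a sample function for a range of counts.
-- This is only a spot check, not a proof; the real argument is the
-- associativity of (.).
sameResults :: Bool
sameResults = and [ nTimes n (+3) 1 == nTimes' n (+3) 1 | n <- [0..100] ]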
However, the "speedup" is not the speedup you might be thinking of. Exponentiation by squaring is good because it decreases the number of times the * operator is applied, from a linear to a logarithmic number of times. In this case, you are decreasing the number of times the . operator is applied, from a linear to a logarithmic number of times.

However, as Ørjan Johansen said, unlike *, the . operator doesn't really do much -- it just takes two function values and outputs a new function value, which essentially wraps the given two functions plus some code.
The resulting function that you get from nTimes', when applied to a value, must still run f n times. Therefore, there is no improvement in actually running the resulting function, only an improvement in the process of constructing the resulting function using (.).
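To put a rough number on that construction cost, here is a small sketch (the counting mirrors the recursion of the two definitions above; it is a back-of-the-envelope illustration, not a benchmark):

-- Number of (.) applications used to *build* the composed function.
-- nTimes uses one (.) per step, so about n of them; the squaring
-- version uses one (.) per halving or decrement, so about log2 n
-- plus the number of set bits in n.
compositionsLinear :: Int -> Int
compositionsLinear n = max 0 (n - 1)

compositionsSquaring :: Int -> Int
compositionsSquaring = go 0
  where
    go acc 0 = acc
    go acc n
      | even n    = go (acc + 1) (n `div` 2)   -- one (f . f)
      | otherwise = go (acc + 1) (n - 1)       -- one (acc . f)

-- e.g. compositionsLinear 1000000 is 999999,
-- while compositionsSquaring 1000000 is only 26.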