I believe I understand fmap . fmap for Functors, but on functions it has been hurting my head for months now. I've seen that you can just apply the definition of (.) to (.) . (.), but I've forgotten how to do that. When I try it myself, it always turns out wrong:
(.) f g = \x -> f (g x)
(.) (.) (.) = \x -> (.) ((.) x)
\x f -> (.) ((.) x) f
\x f y -> (((.)(f y)) x)
\x f y g-> (((.)(f y) g) x)
\x f y g-> ((f (g y)) x)
\x f y g-> ((f (g y)) x):: t2 -> (t1 -> t2 -> t) -> t3 -> (t3 -> t1) -> t
If "just applying the definition" is the only way of doing it, how did anybody come up with (.) . (.)? There must be some deeper understanding or intuition I'm missing.
Coming up with (.) . (.) is actually pretty straightforward; it's the intuition behind what it does that is quite tricky to understand.
(.) gets you very far when rewriting expressions into "pipe"-style computations (think of | in the shell). However, it becomes awkward to use once you try to compose a function that takes multiple arguments with a function that only takes one. As an example, let's take the definition of concatMap:
concatMap :: (a -> [b]) -> [a] -> [b]
concatMap f xs = concat (map f xs)
Getting rid of xs is just a standard operation:
concatMap f = concat . map f
However, there's no "nice" way of getting rid of f. This is caused by the fact that map takes two arguments and we'd like to apply concat to its final result.
You can of course apply some pointfree tricks and get away with just (.):
concatMap f = (.) concat (map f)
concatMap f = (.) concat . map $ f
concatMap = (.) concat . map
concatMap = (concat .) . map
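As a quick sanity check that these rewrites really are the same function, the first and last forms can be compared side by side (a sketch; concatMap1 and concatMap2 are hypothetical names chosen to avoid clashing with the Prelude's concatMap):

```haskell
-- Two of the pointfree forms above, under hypothetical names.
concatMap1, concatMap2 :: (a -> [b]) -> [a] -> [b]
concatMap1 f = (.) concat (map f)   -- prefix form of (.)
concatMap2   = (concat .) . map     -- fully pointfree form
```

Both behave exactly like the original concatMap.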
But alas, the readability of this code is mostly gone. Instead, we introduce a new combinator that does exactly what we need: apply the second function to the final result of the first one.
-- .: is fairly standard name for this combinator
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(f .: g) x y = f (g x y)
concatMap = concat .: map
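Here is the combinator in use, as a small self-contained sketch (concatMap' is a hypothetical name to avoid the Prelude clash, and zipSum is a made-up second example):

```haskell
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(f .: g) x y = f (g x y)

-- The motivating example, under a hypothetical name:
concatMap' :: (a -> [b]) -> [a] -> [b]
concatMap' = concat .: map

-- A made-up second use: sum the pairwise sums of two lists.
zipSum :: Num n => [n] -> [n] -> n
zipSum = sum .: zipWith (+)
```

In both cases the one-argument function (concat, sum) is applied to the final result of the two-argument one (map, zipWith (+)).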
Fine, that's it for motivation. Let's get to the pointfree business.
(.:) = \f g x y -> f (g x y)
= \f g x y -> f ((g x) y)
= \f g x y -> f . g x $ y
= \f g x -> f . g x
Now, here comes the interesting part. This is yet another of the pointfree tricks that usually helps when you get stuck: we rewrite . into its prefix form and try to continue from there.
= \f g x -> (.) f (g x)
= \f g x -> (.) f . g $ x
= \f g -> (.) f . g
= \f g -> (.) ((.) f) g
= \f -> (.) ((.) f)
= \f -> (.) . (.) $ f
= (.) . (.)
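We can spot-check that the derived pointfree form agrees with the direct definition on a sample input (a sketch; dotColon and dotDot are hypothetical names for the two forms):

```haskell
-- Direct definition of the combinator:
dotColon :: (c -> d) -> (a -> b -> c) -> a -> b -> d
dotColon f g x y = f (g x y)

-- The pointfree form we just derived:
dotDot :: (c -> d) -> (a -> b -> c) -> a -> b -> d
dotDot = (.) . (.)
```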
As for intuition, there's this very nice article that you should read. I'll paraphrase the part about (.):
Let's think again about what our combinator should do: it should apply f to the result of the result of g. (I've been using "final result" in the part before on purpose; it's really what you get when you fully apply the g function, modulo unifying type variables with another function type. "Result" here is just the application g x for some x.)
What does it mean for us to apply f to the result of g? Well, once we apply g to some value, we'll take the result and apply f to it. Sounds familiar: that's what (.) does.
result :: (b -> c) -> ((a -> b) -> (a -> c))
result = (.)
Now, it turns out that the composition of those combinators is just function composition, that is:
(.:) = result . result -- the result of result
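The "result of the result" reading can be written out and checked directly (a minimal sketch using the names from the paraphrase above):

```haskell
-- (.) viewed as "transform the result of a one-argument function":
result :: (b -> c) -> ((a -> b) -> (a -> c))
result = (.)

-- Composing the combinator with itself reaches past two arguments:
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(.:) = result . result
```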
You can also use your understanding of fmap . fmap. If you have two Functors foo and bar, then
fmap . fmap :: (a -> b) -> foo (bar a) -> foo (bar b)
fmap . fmap takes a function and produces an induced function for the composition of the two Functors.
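For instance, with foo = Maybe and bar = [] (a sketch; doubleInside is a hypothetical name):

```haskell
-- fmap . fmap lifts a function through both Functor layers at once.
doubleInside :: Maybe [Int] -> Maybe [Int]
doubleInside = (fmap . fmap) (* 2)
```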
Now, for any type t, (->) t is a Functor, and the fmap for that Functor is (.).
So (.) . (.) is fmap . fmap for the case where the two Functors are (->) s and (->) t, and thus
(.) . (.) :: (a -> b) -> ((->) s) ((->) t a) -> ((->) s) ((->) t b)
= (a -> b) -> (s -> (t -> a)) -> (s -> (t -> b))
= (a -> b) -> (s -> t -> a ) -> (s -> t -> b )
it "composes" a function f :: a -> b with a function of two arguments, g :: s -> t -> a:
((.) . (.)) f g = \x y -> f (g x y)
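Concretely (a sketch; describeSum is a hypothetical name):

```haskell
-- show takes one argument, (+) takes two; (.) . (.) glues them together,
-- applying show to the final result of (+).
describeSum :: Int -> Int -> String
describeSum = ((.) . (.)) show (+)
```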
That view also makes it clear that, and how, the pattern extends to functions taking more arguments:
(.) :: (a -> b) -> (s -> a) -> (s -> b)
(.) . (.) :: (a -> b) -> (s -> t -> a) -> (s -> t -> b)
(.) . (.) . (.) :: (a -> b) -> (s -> t -> u -> a) -> (s -> t -> u -> b)
etc.
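The three-fold case from the table above can be checked as well (a sketch; compose3 is a hypothetical name):

```haskell
-- Three copies of (.) reach past three arguments of g.
compose3 :: (a -> b) -> (s -> t -> u -> a) -> s -> t -> u -> b
compose3 = (.) . (.) . (.)
```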
Your solution diverges when you introduce y. It should be:
\x f y -> ((.) ((.) x) f) y :: (c -> d) -> (a -> b -> c) -> a -> b -> d
\x f y z -> ((.) ((.) x) f) y z :: (c -> d) -> (a -> b -> c) -> a -> b -> d
\x f y z -> ((.) x (f y)) z :: (c -> d) -> (a -> b -> c) -> a -> b -> d
-- Or alternately:
\x f y z -> (x . f y) z :: (c -> d) -> (a -> b -> c) -> a -> b -> d
\x f y z -> (x (f y z)) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
Which matches the original type signature: (.) . (.) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(It's easiest to do the expansion in GHCi, where you can check each step with :t expression.)
Edit:
The deeper intuition is this:
(.) is simply defined as
\f g -> \x -> f (g x)
Which we can simplify to
\f g x -> f (g x)
So when you supply it two arguments, the application is curried and still needs one more argument to resolve.
Each time you use (.) with two arguments, you create a "need" for one more argument.
(.) . (.) is of course just (.) (.) (.), so let's expand it:
(\f0 g0 x0 -> f0 (g0 x0)) (\f1 g1 x1 -> f1 (g1 x1)) (\f2 g2 x2 -> f2 (g2 x2))
We can beta-reduce on f0 and g0 (but we don't have an x0!):
\x0 -> (\f1 g1 x1 -> f1 (g1 x1)) ((\f2 g2 x2 -> f2 (g2 x2)) x0)
Substitute the second expression for f1...
\x0 -> \g1 x1 -> ((\f2 g2 x2 -> f2 (g2 x2)) x0) (g1 x1)
Now it "flips back"! (beta-reduction on f2). This is the interesting step: x0 is substituted for f2, which means that x, which could have been data, is instead a function. This is what (.) . (.) provides: the "need" for the extra argument.
\x0 -> \g1 x1 -> (\g2 x2 -> x0 (g2 x2)) (g1 x1)
This is starting to look normal... Let's beta-reduce one last time (on g2):
\x0 -> \g1 x1 -> (\x2 -> x0 ((g1 x1) x2))
So we're left with simply \x0 g1 x1 x2 -> x0 ((g1 x1) x2), where the arguments are nicely still in order.
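The final reduced lambda can be checked directly against (.) . (.) (a sketch; reduced is a hypothetical name):

```haskell
-- The fully beta-reduced form from the expansion above.
reduced :: (c -> d) -> (a -> b -> c) -> a -> b -> d
reduced x0 g1 x1 x2 = x0 ((g1 x1) x2)
```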