Given an operation `(??)` such that

```haskell
(a ?? b) ?? c = a ?? (b ?? c)
```

(that is to say, `(??)` is associative), must it be the case that

```haskell
liftA2 (??) (liftA2 (??) a b) c = liftA2 (??) a (liftA2 (??) b c)
```

(that is to say, `liftA2 (??)` is associative)? If we prefer, we can rewrite this as:

```haskell
fmap (??) (fmap (??) a <*> b) <*> c = fmap (??) a <*> (fmap (??) b <*> c)
```
I spent a little while staring at the applicative laws, but I couldn't come up with a proof that this would be the case. So I set out to disprove it. All the out-of-the-box applicatives (`Maybe`, `[]`, `Either`, etc.) that I have tried follow the law, so I thought I would create my own.
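For instance, here is the kind of sanity check I ran, using `(++)` as a sample associative operation (the names `lhsList`/`rhsList` etc. are just for this illustration):

```haskell
import Control.Applicative (liftA2)

-- Check liftA2 (++) associativity for the list and Maybe applicatives,
-- using (++) on String as the associative operation (??).
lhsList, rhsList :: [String]
lhsList = liftA2 (++) (liftA2 (++) ["a", "b"] ["c"]) ["d", "e"]
rhsList = liftA2 (++) ["a", "b"] (liftA2 (++) ["c"] ["d", "e"])

lhsMaybe, rhsMaybe :: Maybe String
lhsMaybe = liftA2 (++) (liftA2 (++) (Just "x") (Just "y")) (Just "z")
rhsMaybe = liftA2 (++) (Just "x") (liftA2 (++) (Just "y") (Just "z"))

main :: IO ()
main = do
  print (lhsList == rhsList)   -- True
  print (lhsMaybe == rhsMaybe) -- True
```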
My best idea was to make a vacuous applicative with an extra piece of information attached.
```haskell
data Vacuous a = Vac Alg
```

where `Alg` would be some algebra I would define at my own convenience later, so as to make the property fail while the applicative laws still hold.
Now we define our instances as such:
```haskell
instance Functor Vacuous where
  fmap _ (Vac a) = Vac a  -- note: fmap f = id would not typecheck, since the phantom parameter changes

instance Applicative Vacuous where
  pure _ = Vac i
  liftA2 _ (Vac a) (Vac b) = Vac (comb a b)
  (Vac a) <*> (Vac b) = Vac (comb a b)
```
where `i` is some element of `Alg` to be determined and `comb` is a binary combinator on `Alg`, also to be determined. There is not really any other way we can go about defining this.

If we want to fulfill the Identity law, this forces `i` to be an identity for `comb`. We then get Homomorphism and Interchange for free. But now Composition forces `comb` to be associative on `Alg`:
```haskell
((pure (.) <*> Vac u) <*> Vac v) <*> Vac w = Vac u <*> (Vac v <*> Vac w)
((Vac i <*> Vac u) <*> Vac v) <*> Vac w   = Vac u <*> (Vac v <*> Vac w)
(Vac u <*> Vac v) <*> Vac w               = Vac u <*> (Vac v <*> Vac w)
(Vac (comb u v)) <*> Vac w                = Vac u <*> (Vac (comb v w))
Vac (comb (comb u v) w)                   = Vac (comb u (comb v w))
comb (comb u v) w                         = comb u (comb v w)
```
Forcing us to satisfy the property.
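To see that this is a real obstruction, here is a hypothetical instantiation (the choices `Alg = Integer`, `i = 0`, and this particular `comb` are mine, purely for illustration): a magma with a two-sided identity that is not associative. Identity holds, but Composition fails, exactly as the derivation predicts.

```haskell
-- Hypothetical Alg = Integer with a non-associative comb that still
-- has 0 as a two-sided identity (so the Identity law holds).
comb :: Integer -> Integer -> Integer
comb a b = a + b + a * b * (a - b)

newtype Vacuous a = Vac Integer deriving (Eq, Show)

instance Functor Vacuous where
  fmap _ (Vac a) = Vac a

instance Applicative Vacuous where
  pure _ = Vac 0
  Vac a <*> Vac b = Vac (comb a b)

identityHolds :: Bool
identityHolds = (pure id <*> Vac 7) == (Vac 7 :: Vacuous Integer)

compositionHolds :: Bool
compositionHolds =
  let u = Vac 2 :: Vacuous (Integer -> Integer)
      v = Vac 1 :: Vacuous (Integer -> Integer)
      w = Vac 1 :: Vacuous Integer
  in (pure (.) <*> u <*> v <*> w) == (u <*> (v <*> w))

main :: IO ()
main = do
  print identityHolds    -- True
  print compositionHolds -- False: comb is not associative
```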
Is there a counterexample? If not, how can we prove this property?
We start by rewriting the left-hand side, using the applicative laws. Recall that both `<$>` and `<*>` are left-associative, so that we have, e.g., `x <*> y <*> z = (x <*> y) <*> z` and `x <$> y <*> z = (x <$> y) <*> z`.
```haskell
  (??) <$> ((??) <$> a <*> b) <*> c
= {- fmap/pure law -}
  pure (??) <*> (pure (??) <*> a <*> b) <*> c
= {- composition law -}
  pure (.) <*> pure (??) <*> (pure (??) <*> a) <*> b <*> c
= {- homomorphism law -}
  pure ((.) (??)) <*> (pure (??) <*> a) <*> b <*> c
= {- composition law -}
  pure (.) <*> pure ((.) (??)) <*> pure (??) <*> a <*> b <*> c
= {- homomorphism law -}
  pure ((.) ((.) (??)) (??)) <*> a <*> b <*> c
= {- definition of (.) -}
  pure (\x -> (.) (??) ((??) x)) <*> a <*> b <*> c
= {- definition of (.), eta expansion -}
  pure (\x y z -> (??) ((??) x y) z) <*> a <*> b <*> c
= {- associativity of (??) -}
  pure (\x y z -> x ?? y ?? z) <*> a <*> b <*> c
```
The last form reveals that, essentially, the original expression "runs" the actions `a`, `b`, and `c` in that order, sequencing their effects in that way, and then uses `(??)` to purely combine the three results.
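We can watch this happen with the writer-like `(,) String` applicative (my choice of example; the first tuple component plays the role of the "effect"): both the nested expression and the canonical form log the effects of `a`, `b`, `c` left to right and combine the payloads purely.

```haskell
-- (,) String is an Applicative whose "effect" is accumulating the
-- first component with (++). Both forms sequence the logs "A", "B",
-- "C" in order, then combine the payloads purely with (++).
nested, canonical :: (String, String)
nested    = (++) <$> ((++) <$> ("A", "x") <*> ("B", "y")) <*> ("C", "z")
canonical = pure (\x y z -> (x ++ y) ++ z)
              <*> ("A", "x") <*> ("B", "y") <*> ("C", "z")

main :: IO ()
main = do
  print nested     -- ("ABC","xyz")
  print canonical  -- ("ABC","xyz")
```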
We can then prove that the right hand side is equivalent to the above form.
```haskell
  (??) <$> a <*> ((??) <$> b <*> c)
= {- fmap/pure law -}
  pure (??) <*> a <*> (pure (??) <*> b <*> c)
= {- composition law -}
  pure (.) <*> (pure (??) <*> a) <*> (pure (??) <*> b) <*> c
= {- composition law -}
  pure (.) <*> pure (.) <*> pure (??) <*> a <*> (pure (??) <*> b) <*> c
= {- homomorphism law -}
  pure ((.) (.) (??)) <*> a <*> (pure (??) <*> b) <*> c
= {- composition law -}
  pure (.) <*> (pure ((.) (.) (??)) <*> a) <*> pure (??) <*> b <*> c
= {- composition law -}
  pure (.) <*> pure (.) <*> pure ((.) (.) (??)) <*> a <*> pure (??) <*> b <*> c
= {- homomorphism law -}
  pure ((.) (.) ((.) (.) (??))) <*> a <*> pure (??) <*> b <*> c
= {- interchange law -}
  pure ($ (??)) <*> (pure ((.) (.) ((.) (.) (??))) <*> a) <*> b <*> c
= {- composition law -}
  pure (.) <*> pure ($ (??)) <*> pure ((.) (.) ((.) (.) (??))) <*> a <*> b <*> c
= {- homomorphism law -}
  pure ((.) ($ (??)) ((.) (.) ((.) (.) (??)))) <*> a <*> b <*> c
```
Now, we only have to rewrite the point-free term `(.) ($ (??)) ((.) (.) ((.) (.) (??)))` in a more readable point-ful form, so that we can see it equals the term we got in the first half of the proof. This is just a matter of applying `(.)` and `($)` as needed.
```haskell
  (.) ($ (??)) ((.) (.) ((.) (.) (??)))
= \x -> (.) ($ (??)) ((.) (.) ((.) (.) (??))) x
= \x -> ($ (??)) ((.) (.) ((.) (.) (??)) x)
= \x -> (.) (.) ((.) (.) (??)) x (??)
= \x y -> (.) ((.) (.) (??) x) (??) y
= \x y -> (.) (.) (??) x ((??) y)
= \x y z -> (.) ((??) x) ((??) y) z
= \x y z -> (??) x ((??) y z)
= \x y z -> x ?? y ?? z
```
where in the last step we exploited the associativity of `(??)`. (Whew.)
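As a final sanity check (at `(??) = (+)`, my choice of associative operation), the two point-free normal forms produced by the two halves of the proof agree pointwise:

```haskell
-- The canonical forms from the two derivations, at (??) = (+).
lhsForm, rhsForm :: Integer -> Integer -> Integer -> Integer
lhsForm = (.) ((.) (+)) (+)                    -- \x y z -> (x + y) + z
rhsForm = (.) ($ (+)) ((.) (.) ((.) (.) (+)))  -- \x y z -> x + (y + z)

main :: IO ()
main = print (and [ lhsForm x y z == rhsForm x y z
                  | x <- [-2..2], y <- [-2..2], z <- [-2..2] ])
-- True, since (+) is associative
```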
Not only does it preserve associativity; I would say that's perhaps the main idea behind the applicative laws in the first place!
Recall the maths-style form of the class:

```haskell
class Functor f => Monoidal f where
  funit :: () -> f ()
  fzip  :: (f a, f b) -> f (a,b)
```

with laws

```haskell
zAssc: fzip (fzip (x,y), z) ≅ fzip (x, fzip (y,z))  -- modulo tuple re-bracketing
fComm: fzip (fmap fx x, fmap fy y) ≡ fmap (fx***fy) (fzip (x,y))
fIdnt: fmap id ≡ id                    -- ─╮
fCmpo: fmap f . fmap g ≡ fmap (f . g)  -- ─┴ functor laws
```
In this approach, `liftA2` factors into fmapping a tuple-valued function over an already-zipped pair:

```haskell
liftZ2 :: ((a,b) -> c) -> (f a, f b) -> f c
liftZ2 f = fmap f . fzip
```

i.e.

```haskell
liftZ2 f (a,b) = f <$> fzip (a,b)
```
Now say we are given

```haskell
g :: (G,G) -> G
gAssc: g (g (α,β), γ) ≡ g (α, g (β,γ))
```

or, point-free (again ignoring tuple-bracket interchange),

```haskell
gAssc: g . (g***id) ≅ g . (id***g)
```
If we write everything in this style, it's easy to see that associativity preservation is basically just `zAssc`, with everything about `g` happening in a separate `fmap` step:
```haskell
                     liftZ2 g (liftZ2 g (a,b), c)
{- liftZ2 -}       ≡ g <$> fzip (g <$> fzip (a,b), c)
{- fIdnt, fComm -} ≡ g . (g***id) <$> fzip (fzip (a,b), c)
{- gAssc, zAssc -} ≡ g . (id***g) <$> fzip (a, fzip (b,c))
{- fComm, fIdnt -} ≡ g <$> fzip (a, g <$> fzip (b,c))
{- liftZ2 -}       ≡ liftZ2 g (a, liftZ2 g (b,c))
```
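As a minimal runnable sketch (the `Maybe` instance of `Monoidal` and the carrier `G = String` below are my own filling-in, following the definitions above), the chain can be checked on a concrete functor:

```haskell
-- Monoidal class and liftZ2 as defined in the text.
class Functor f => Monoidal f where
  funit :: () -> f ()
  fzip  :: (f a, f b) -> f (a, b)

-- An assumed instance for Maybe: zip succeeds only if both sides do.
instance Monoidal Maybe where
  funit = Just
  fzip (Just a, Just b) = Just (a, b)
  fzip _                = Nothing

liftZ2 :: Monoidal f => ((a, b) -> c) -> (f a, f b) -> f c
liftZ2 f = fmap f . fzip

-- g: an associative combiner on the sample carrier G = String.
g :: (String, String) -> String
g = uncurry (++)

main :: IO ()
main = do
  let a = Just "a"; b = Just "b"; c = Just "c"
  print (liftZ2 g (liftZ2 g (a, b), c))  -- Just "abc"
  print (liftZ2 g (a, liftZ2 g (b, c)))  -- Just "abc"
```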