
Reasonable Comonad implementations

We can describe a monad as a computational context, where the monad implementation exactly preserves the meaning of that context. For example Option - the meaning of the context is that the value may or may not exist. Given the Option datatype, the only implementation which makes sense is pure = some, flatMap f = { none => none; some x => f x }. As I understand monads, by following the type signatures there is only one reasonable implementation for any monad. In other words, if you want to add some meaningful context to a value/computation, there is only one way to do it for any particular monad.
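In Haskell terms (a minimal sketch, using Maybe in place of Option), that single sensible implementation looks like this:

pureOpt :: a -> Maybe a
pureOpt = Just

flatMapOpt :: Maybe a -> (a -> Maybe b) -> Maybe b
flatMapOpt Nothing  _ = Nothing
flatMapOpt (Just x) f = f x
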
On the other hand, when it comes to comonads, things suddenly feel completely different: it seems there are many, many ways to implement a comonad for a given type, and you might even give a certain meaning to every implementation.
Consider NEL (a non-empty list), with copure = head. cojoin is implemented via tails, which perfectly satisfies the types. If we implement cojoin by permutations, or as fa map (_ => fa) map f, it will not satisfy the comonad laws.
But a "circling" (rotation) implementation is valid:

override def cobind[A, B](fa: NonEmptyList[A])(f: (NonEmptyList[A]) => B): NonEmptyList[B] = {
  val n: NonEmptyList[NonEmptyList[A]] = fa.map(_ => fa).zipWithIndex.map { case (li, i) =>
    // rotate the list so that position i comes first
    val (h: List[A], t: List[A]) = li.list.splitAt(i)
    val ll: List[A] = t ++ h
    NonEmptyList.nel(ll.head, ll.tail)
  }
  n map f
}
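
For reference, here is a sketch of the tails-based instance mentioned above (written in plain Haskell over Data.List.NonEmpty rather than scalaz, so the details differ slightly from the NonEmptyList API):

import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

copure :: NonEmpty a -> a
copure = NE.head

-- the "tails" cojoin: each position is labelled with the suffix starting there,
-- e.g. cojoin (1 :| [2,3]) = (1 :| [2,3]) :| [2 :| [3], 3 :| []]
cojoin :: NonEmpty a -> NonEmpty (NonEmpty a)
cojoin xs@(_ :| [])       = xs :| []
cojoin xs@(_ :| (y : ys)) = xs NE.<| cojoin (y :| ys)

cobind :: (NonEmpty a -> b) -> NonEmpty a -> NonEmpty b
cobind f = fmap f . cojoin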

The reason for such ambiguity for a Comonad, even with the laws restricting us, as I see it, is this: with a Monad we restrict ourselves to some context (we kind of can't "create" new information), whereas with a Comonad we are extending that context further (there are quite a few ways to make a list of lists from a list), which gives us many more possibilities. My mental metaphor is: with a Monad, we are standing on a road and want to reach some destination point A, so there is only one meaningful shortest way to choose. With a Comonad, we are standing at A and want to go somewhere from it, so there are many more ways of doing it.
So my question is: am I actually right? Can we implement a comonad in different ways, each time producing another meaningful abstraction? Or is only the tails implementation reasonable, because of the abstraction a comonad is supposed to bring in?

asked Mar 04 '16 by I See Voices


2 Answers

Nonempty lists arise as two distinct comonads by two standard constructions.

Firstly, the cofree comonad is given thus.

data Cofree f x = x :& f (Cofree f x)  -- every node is labelled with an x

instance Functor f => Functor (Cofree f) where
  fmap f (x :& fcx) = f x :& fmap (fmap f) fcx

instance Functor f => Comonad (Cofree f) where
  extract (x :& _) = x   -- get the label of the top node
  duplicate cx@(_ :& fcx) = cx :& fmap duplicate fcx

Nonempty lists can be given as

type Nellist1 = Cofree Maybe

and are thus automatically comonadic. That gives you the "tails" comonad.
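
To see that behaviour concretely, here is a small worked example (the value ex is only for illustration):

ex :: Nellist1 Int
ex = 1 :& Just (2 :& Just (3 :& Nothing))    -- the nonempty list [1,2,3]

-- duplicate ex
--   =  (1 :& Just (2 :& Just (3 :& Nothing)))
--      :& Just ((2 :& Just (3 :& Nothing))
--               :& Just ((3 :& Nothing) :& Nothing))
-- i.e. every node gets relabelled with the whole suffix hanging from it,
-- which is exactly the "tails" cojoin from the question.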

Meanwhile, the decomposition of a structure as an "element zipper" induces comonadic structure. As I explained at great length,

Differentiability amounts to this bunch of operations on zippers (individual elements picked out of their context and put "in focus")

class (Functor f, Functor (DF f)) => Diff1 f where
  type DF f :: * -> *
  upF      ::  ZF f x  ->  f x           -- defocus
  downF    ::  f x     ->  f (ZF f x)    -- find all ways to focus
  aroundF  ::  ZF f x  ->  ZF f (ZF f x) -- find all ways to *re*focus

data ZF f x = (:<-:) {cxF :: DF f x, elF :: x}

so we get a functor and a comonad

instance Diff1 f => Functor (ZF f) where
  fmap f (df :<-: x) = fmap f df :<-: f x

instance Diff1 f => Comonad (ZF f) where
  extract    = elF
  duplicate  = aroundF
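
For a concrete instance of this machinery, here is a sketch (not the nonempty-list case, but the simpler one): ordinary lists differentiate to the familiar element zipper.

data DL x = DL [x] [x]    -- elements left of the focus (nearest first), elements right

instance Functor DL where
  fmap f (DL ls rs) = DL (map f ls) (map f rs)

instance Diff1 [] where
  type DF [] = DL
  upF (DL ls rs :<-: x) = reverse ls ++ x : rs
  downF = go []
    where
      go _  []       = []
      go ls (x : rs) = (DL ls rs :<-: x) : go (x : ls) rs
  aroundF z = DL (unfold left z) (unfold right z) :<-: z
    where
      unfold step y = maybe [] (\y' -> y' : unfold step y') (step y)
      left  (DL (l : ls) rs :<-: x) = Just (DL ls (x : rs) :<-: l)
      left  _                       = Nothing
      right (DL ls (r : rs) :<-: x) = Just (DL (x : ls) rs :<-: r)
      right _                       = Nothing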

In principle, nonempty lists arise by this construction, too. The trouble is that the functor being differentiated is not so easy to express in Haskell, even though the derivative is sensible. Let's go nuts...

Nonempty lists amount to ZF thingy x where DF thingy = []. Can we integrate lists? Fooling about algebraically might give us a clue

[x] = Either () (x, [x]) = 1 + x * [x]

so as a power series, we get

[x] = Sum(n :: Nat). x^n

and we can integrate power series

Integral [x] dx = Sum(n :: Nat). x^(n+1)/(n+1)

which means we get some sort of arbitrary tuples of size (n+1), but we have to identify them up to some relation where the equivalence classes have size (n+1). One way to do that is to identify tuples up to rotation, so you don't know which of the (n+1) positions is "first".

That is, lists are the derivative of nonempty cycles. Think about a bunch of people at a round table playing cards (possibly solitaire). Rotate the table and you get the same bunch of people playing cards. But once you designate the dealer, you fix the list of other players in order, clockwise starting left of the dealer.
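
Up to representation, the comonad this zipper structure induces on nonempty lists is the rotation ("circling") instance from the question; a rough sketch over Data.List.NonEmpty:

import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- move the dealer one seat along: cycle the head to the back
rotate :: NonEmpty a -> NonEmpty a
rotate (x :| [])       = x :| []
rotate (x :| (y : ys)) = y :| (ys ++ [x])

-- duplicate by rotation: each position becomes "the same cycle, read from here";
-- extract is still the head
rotations :: NonEmpty a -> NonEmpty (NonEmpty a)
rotations xs = NE.fromList (NE.take (NE.length xs) (NE.iterate rotate xs))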

Two standard constructions; two comonads for the same functor.

(In my comment earlier, I remarked about the possibility of multiple monads. It's a bit involved, but here's a starting point. Every monad m is also applicative, and the applicative laws make m () a monoid. Correspondingly, every monoid structure for m () at least gives a candidate for a monad structure on m. In the case of writer monads (,) s, we get that the candidates for monads are the monoids on (s,()) which are just the same as the monoids on s.)
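
For instance (a small sketch of that idea), the additive and multiplicative monoids on Int give two different lawful binds for the same writer-like functor (,) Int:

-- writer-style bind using the additive monoid; its unit is \a -> (0, a)
bindSum :: (Int, a) -> (a -> (Int, b)) -> (Int, b)
bindSum (s, a) k = let (s', b) = k a in (s + s', b)

-- writer-style bind using the multiplicative monoid; its unit is \a -> (1, a)
bindProd :: (Int, a) -> (a -> (Int, b)) -> (Int, b)
bindProd (s, a) k = let (s', b) = k a in (s * s', b)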

Edit: Nonempty lists are also monadic in at least two distinct ways.

I define the identity and pairing for functors, as follows.

newtype I         x = I x
data    (f :*: g) x = (:&:) {lll :: f x, rrr :: g x}

Now, I can introduce nonempty lists as follows, then define concatenation.

newtype Ne x = Ne ((I :*: []) x)

cat :: Ne x -> Ne x -> Ne x
cat (Ne (I x :&: xs)) (Ne (I y :&: ys)) = Ne (I x :&: (xs ++ y : ys))

These are monadic just the way possibly empty lists are:

instance Monad Ne where
  return x = Ne (I x :&: [])
  Ne (I x :&: xs) >>= k = foldl cat (k x) (map k xs)
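
A quick usage check (assuming the Functor and Applicative boilerplate that is omitted here): this bind concatenates just like the ordinary list monad, but can never produce an empty result.

exampleNe :: Ne Int
exampleNe = Ne (I 1 :&: [2]) >>= \x -> Ne (I x :&: [x * 10])
-- = cat (Ne (I 1 :&: [10])) (Ne (I 2 :&: [20]))
-- = Ne (I 1 :&: [10, 2, 20])    -- i.e. the nonempty list [1,10,2,20]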

However, I is a monad:

instance Monad I where
  return = I
  I a >>= k = k a

Moreover, monads are closed under pairing:

instance (Monad f, Monad g) => Monad (f :*: g) where
  return x = return x :&: return x
  (fa :&: ga) >>= k = (fa >>= (lll . k)) :&: (ga >>= (rrr . k))

So we could just have written

newtype Ne x = Ne ((I :*: []) x) deriving (Monad, Applicative, Functor)

but the return for that monad gives us double vision.

return x = Ne (I x :&: [x])

So there you are: nonempty lists are comonadic two ways, monadic two ways, applicative six ways,...

(Lots more to say about this, but I have to stop somewhere.)

answered Sep 20 '22 by pigworker


Here is the counterexample from pigworker's comment, showing that a single functor can have multiple lawful Monad instances, worked through a bit more (though not typechecked, so pardon any errors).

data WithBool a = WB Bool a deriving Functor

instance Monad WithBool where
     return = WB z
     (WB b a) >>= f =
           case f a of (WB b2 r) -> WB (b `op` b2) r

-- This holds if op = (&&) and z = True
-- This also holds if op = (||) and z = False
-- It should also work if op = const or `flip const` and z = True _or_ False
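
To make one of those choices concrete, fix op = (&&) and z = True (a quick check, not part of the original comment):

unitAll :: a -> WithBool a
unitAll = WB True

bindAll :: WithBool a -> (a -> WithBool b) -> WithBool b
bindAll (WB b a) f = case f a of WB b2 r -> WB (b && b2) r

-- The unit laws hold because True is the identity for (&&):
--   bindAll (unitAll a) f    =  case f a of WB b2 r -> WB (True && b2) r  =  f a
--   bindAll (WB b a) unitAll =  WB (b && True) a                          =  WB b a
-- Associativity holds because (&&) is associative.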

As Bakuriu says, the choice of a "default" implementation is thus somewhat arbitrary, and conditioned by what people are expected to want.

answered Sep 21 '22 by sclv