What does a nontrivial comonoid look like?

Comonoids are mentioned, for example, in Haskell's distributive library docs:

Due to the lack of non-trivial comonoids in Haskell, we can restrict ourselves to requiring a Functor rather than some Coapplicative class.

After a little searching I found a StackOverflow answer that explains this a bit more with the laws that comonoids would have to satisfy. So I think I understand why there's only one possible instance for a hypothetical Comonoid typeclass in Haskell.

Thus, to find a nontrivial comonoid, I suppose we'd have to look in some other category. Surely, if category theorists have a name for comonoids, then there are some interesting ones. The other answers on that page seem to hint at an example involving Supply, but I couldn't figure one out that still satisfies the laws.

I also turned to Wikipedia: there's a page for monoids that doesn't reference category theory, which seems to me an adequate description of Haskell's Monoid typeclass. But "comonoid" redirects to a category-theoretic description of monoids and comonoids together that I can't understand, and there still don't seem to be any interesting examples.

So my questions are:

  1. Can comonoids be explained in non-category-theoretic terms like monoids?
  2. What is a simple example of an interesting comonoid, even if it's not a Haskell type? (Could one be found in a Kleisli category over a familiar Haskell monad?)

edit: I am not sure if this is actually category-theoretically correct, but what I was imagining in the parenthetical of question 2 was nontrivial definitions of delete :: a -> m () and split :: a -> m (a, a) for some specific Haskell type a and Haskell monad m that satisfy Kleisli-arrow versions of the comonoid laws in the linked answer. Other examples of comonoids are still welcome.
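
Spelled out, here is a sketch of what I have in mind (the class is hypothetical; the laws are just the comonoid laws from the linked answer with composition replaced by Kleisli composition (>=>) and identity by return):

{-# LANGUAGE MultiParamTypeClasses #-}

import Control.Monad ((>=>))

-- Hypothetical: a comonoid in the Kleisli category of m.
class Monad m => KleisliComonoid m a where
  delete :: a -> m ()
  split  :: a -> m (a, a)

-- Left counit:  split >=> (\(x, y) -> delete x >> return y)  ==  return
-- Right counit: split >=> (\(x, y) -> delete y >> return x)  ==  return
-- Coassociativity (after reassociating the nested pairs):
--   split >=> (\(x, y) -> split x >>= \(x1, x2) -> return (x1, (x2, y)))
--     ==
--   split >=> (\(x, y) -> split y >>= \(y1, y2) -> return (x, (y1, y2)))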

asked May 25 '14 by betaveros

4 Answers

As Phillip JF mentioned, comonoids are interesting to talk about in substructural logics. Let's talk about linear lambda calculus. This is much like your normal typed lambda calculus except that every variable must be used exactly once.

To get a feel, let's count the linear functions at a few types. For example, the type

a -> a

has exactly one inhabitant, id, while

(a,a) -> (a,a)

has two: id and flip. Note that in regular lambda calculus (a,a) -> (a,a) has four inhabitants:

(a, b) ↦ (a, a)
(a, b) ↦ (b, b)
(a, b) ↦ (a, b)
(a, b) ↦ (b, a)

but the first two require that we use one of the arguments twice while discarding the other. This is exactly the essence of linear lambda calculus—disallowing those kinds of functions.


As a quick aside, what's the point of linear LC? Well, we can use it to model linear effects or resource usage. If, for instance, we have a file type and a few transformers, it might look like this:

data File
open  :: String -> File
close :: File   -> ()      -- consumes a file, but we're ignoring purity right now
t1    :: File -> File      -- one transformation of an open file
t2    :: File -> File      -- another transformation

and then the following are valid pipelines:

close . t1 . t2 . open
close . t2 . t1 . open
close . t1      . open
close . t2      . open

but this "branching" computation isn't valid:

let f1 = open "foo"
    f2 = t1 f1
    f3 = t2 f1
in close f3

since we used f1 twice.


Now, you might be wondering at this point which things must follow the linear rules. For instance, I decided that some pipelines don't have to include both t1 and t2 (compare the enumeration exercise from before). Further, I introduced the open and close functions, which happily create and destroy the File type despite that being a violation of linearity.

Indeed, we might posit the existence of functions which violate linearity, so long as ordinary clients cannot use them. It's much like the IO monad: all of the secrets live inside the implementation of IO so that users work in a "pure" world.

And this is where Comonoid comes in.

class Comonoid m where
  destroy :: m -> ()      -- discard a value
  split   :: m -> (m, m)  -- duplicate a value

A type that instantiates Comonoid in a linear lambda calculus is a type which has carry-along destruction and duplication rules. In other words, it's a type which isn't very much bound by linear lambda calculus at all.

Since Haskell doesn't implement the linear lambda calculus rules at all, we can always instantiate Comonoid (the bare-variable instance head needs FlexibleInstances):

{-# LANGUAGE FlexibleInstances #-}

instance Comonoid a where
  destroy _ = ()
  split a   = (a, a)

Or, perhaps the other way to think of it is that Haskell is a linear LC system that just happens to instantiate Comonoid for every type and applies destroy and split for you automatically.

answered by J. Abrahamson


  1. A monoid in the usual sense is the same as a categorical monoid in the category of sets. One would expect that a comonoid in the usual sense is the same as a categorical comonoid in the category of sets. But every set in the category of sets is a comonoid in a trivial way, so apparently there is no non-categorical description of comonoids which would be parallel to that of monoids.
  2. Just like a monad is a monoid in the category of endofunctors (what's the problem?), a comonad is a comonoid in the category of endofunctors (what's the coproblem?). So yes, any comonad in Haskell would be an example of a comonoid.
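
To make point 2 concrete on the Haskell side, here is a sketch of the core of the comonad package's Control.Comonad class (the real class also includes extend), together with a hand-rolled nonempty-list instance for illustration:

-- Sketch of the core of Control.Comonad.
class Functor w => Comonad w where
  extract   :: w a -> a        -- the counit
  duplicate :: w a -> w (w a)  -- the comultiplication

-- Nonempty lists form a comonad: extract is the head, and duplicate
-- replaces each element with the nonempty tail starting at it.
data NonEmpty a = a :| [a]

instance Functor NonEmpty where
  fmap f (x :| xs) = f x :| map f xs

instance Comonad NonEmpty where
  extract (x :| _) = x
  duplicate w@(_ :| xs) = w :| rest
    where
      rest = case xs of
        []       -> []
        (y : ys) -> toList' (duplicate (y :| ys))
      toList' (t :| ts) = t : ts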
answered by n. 1.8e9-where's-my-share m.


Well, one way we can think of a monoid is as tied to a particular product construction, so in Set dualizing takes this signature:

mul : A * A -> A
one : A

to this one:

dup : A -> A * A
one : A

but the idea of duality is that every statement you can make has a dual that applies to the dual objects. There is another way of stating what a monoid is that's agnostic to the choice of product construction, and when we take the costructure we can use the coproduct in the output, like:

div : A -> A + A
one : A

where + is a tagged sum. Here every term of this type is always ready to produce a new bit, implicitly derived from the tag denoting the left or the right instance of A. I personally think this is really damn cool. The cooler version of what people were discussing above comes when you construct this not for monoids but for monoid actions.
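
Here's a quick Haskell sketch of that reading: iterating div turns any inhabitant into an infinite stream of bits, one per Left/Right tag.

-- Iterate div :: a -> Either a a, reading each Left/Right tag as a bit.
bits :: (a -> Either a a) -> a -> [Bool]
bits f = go
  where
    go x = case f x of
      Left  x' -> False : go x'
      Right x' -> True  : go x'

-- e.g. with f n = if even n then Left (n `div` 2) else Right (n - 1),
-- bits f 6 begins [False, True, False, True, ...]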

A monoid M is said to act on a set A if there's a function

act : M * A -> A

where we have the following rules

act identity a = a
act f (act g a) = act (f * g) a
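
In Haskell terms, with mempty and <> playing the roles of identity and *, the sketch would be (class name hypothetical):

{-# LANGUAGE MultiParamTypeClasses #-}

-- Hypothetical class mirroring the two action laws above.
class Monoid m => MonoidAction m a where
  act :: m -> a -> a
  -- Laws:
  --   act mempty a    == a
  --   act f (act g a) == act (f <> g) a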

If we want a co-action, what exactly do we want?

coact : A -> M * A

Iterating this generates a stream of M values! I'm having a lot of trouble coming up with the laws for these systems, but I think they must be around somewhere, so I'm going to keep looking tonight. If somebody can tell me what they are, or show that I'm wrong about these things in some way, I'd be interested in that too.
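
The stream claim is easy to see in Haskell; a small sketch (name mine) that iterates the coaction and collects the emitted M values:

-- Iterate a coaction  coact :: a -> (m, a)  into the stream of m's it emits.
coactStream :: (a -> (m, a)) -> a -> [m]
coactStream coact = go
  where
    go x = let (m, x') = coact x in m : go x'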

answered by Samuel Schlesinger


As a physicist, the most common example I deal with is coalgebras, which are comonoid objects in the category of vector spaces, with the monoidal structure usually given by the tensor product.

In that case, at least for finite-dimensional spaces, there is a bijection between monoid and comonoid objects: you can take the adjoint or transpose of the product and unit maps to get a coproduct and a counit that satisfy the comonoid axioms.
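
As a sketch, assuming A is finite-dimensional with a chosen basis (so A ≅ A*): from a product μ : A ⊗ A → A and a unit η : k → A one gets

$$
\Delta := \mu^{\mathsf{T}} : A \to A \otimes A, \qquad
\varepsilon := \eta^{\mathsf{T}} : A \to k,
$$

and the comonoid axioms follow by transposing the monoid axioms.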

In some branches of physics, it is very common to see objects that have both an algebra and a coalgebra structure with some compatibility axioms. The two most common cases are Hopf algebras and Frobenius algebras. They are very convenient for constructing states or solutions that are entangled or correlated.


In programming, the simplest nontrivial example I can think of would be reference-counted pointers such as shared_ptr in C++ and Rc in Rust, along with their weak equivalents. You can copy them, which is a nontrivial operation that bumps up the refcount (so the two copies are distinct from the initial state). You can drop one (call the destructor), which is nontrivial because it bumps down the refcount seen by every other refcounted pointer to the same piece of data.
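
Here is that idea transliterated into Haskell with IORef (a sketch; Handle and these operations are illustrative, not a library API). Note that it lands exactly in the Kleisli shape from the question: split :: Handle a -> IO (Handle a, Handle a) and destroy :: Handle a -> IO ().

import Data.IORef

-- A shared handle: a reference count shared by every copy, plus the value.
data Handle a = Handle (IORef Int) a

newHandle :: a -> IO (Handle a)
newHandle x = do
  count <- newIORef 1
  return (Handle count x)

-- "Clone": bump the shared count and hand back two live handles.
split :: Handle a -> IO (Handle a, Handle a)
split h@(Handle count _) = do
  modifyIORef' count (+ 1)
  return (h, h)

-- "Drop": decrement the shared count, which every other copy observes.
destroy :: Handle a -> IO ()
destroy (Handle count _) = modifyIORef' count (subtract 1)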

Furthermore, weak pointers are a great example of a comonoid action. You can use the co-action to generate a weak pointer from a shared pointer. This can be checked by noting that creating one from a shared pointer and immediately dropping it is a unit operation, and that creating one and then cloning it is equivalent to creating two from the shared pointer.

This is a general thing you see with nontrivial coproducts and their co-actions: when they don't reduce to a copying operation, they intuitively imply some form of action at a distance between the two halves, while also adding an operation that erases one half to leave the other independent.

answered by saolof