(By way of excuse: the title mimics the title of Why do we need monads?)
There are containers (and indexed ones) (and hasochistic ones) and descriptions. But containers are problematic, and in my limited experience it's harder to think in terms of containers than in terms of descriptions. The type of non-indexed containers is isomorphic to Σ — which is rather unspecific. The shapes-and-positions description helps, but in
⟦_⟧ᶜ : ∀ {α β γ} -> Container α β -> Set γ -> Set (α ⊔ β ⊔ γ)
⟦ Sh ◃ Pos ⟧ᶜ A = ∃ λ sh -> Pos sh -> A

Kᶜ : ∀ {α β} -> Set α -> Container α β
Kᶜ A = A ◃ const (Lift ⊥)
we are essentially using Σ rather than shapes and positions.
The type of strictly-positive free monads over containers has a rather straightforward definition, but the type of Freer monads looks simpler to me (and in a sense Freer monads are even better than the usual Free monads, as described in the paper).
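Concretely, here is a minimal Haskell sketch of the Freer construction (my rendering of the usual presentation, not necessarily the paper's exact definitions): it is a monad for any f whatsoever, with no Functor constraint, because fmap just composes with the stored continuation.

{-# LANGUAGE GADTs #-}

data Freer f a where
  Pure :: a -> Freer f a
  Bind :: f x -> (x -> Freer f a) -> Freer f a

instance Functor (Freer f) where
  fmap g (Pure a)    = Pure (g a)
  fmap g (Bind fx k) = Bind fx (fmap g . k)

instance Applicative (Freer f) where
  pure = Pure
  mf <*> ma = mf >>= \ g -> fmap g ma

instance Monad (Freer f) where
  Pure a    >>= g = g a
  Bind fx k >>= g = Bind fx (\ x -> k x >>= g)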
So what can we do with containers in a nicer way than with descriptions or something else?
To my mind, the value of containers (as in container theory) is their uniformity. That uniformity gives considerable scope to use container representations as the basis for executable specifications, and perhaps even machine-assisted program derivation.
I would not recommend fixpoints of (normalized) containers as a good general purpose way to implement recursive data structures. That is, it is helpful to know that a given functor has (up to iso) a presentation as a container, because it tells you that generic container functionality (which is easily implemented, once for all, thanks to the uniformity) can be instantiated to your particular functor, and what behaviour you should expect. But that's not to say that a container implementation will be efficient in any practical way. Indeed, I generally prefer first-order encodings (tags and tuples, rather than functions) of first-order data.
To fix terminology, let us say that the type Cont of containers (on Set, but other categories are available) is given by a constructor <| packing two fields, shapes and positions:

S : Set
P : S -> Set
(This is the same signature of data which is used to determine a Sigma type, or a Pi type, or a W type, but that does not mean that containers are the same as any of these things, or that these things are the same as each other.)
The interpretation of such a thing as a functor is given by
[_]C : Cont -> Set -> Set
[ S <| P ]C X = Sg S \ s -> P s -> X   -- I'd prefer (s : S) * (P s -> X)

mapC : (C : Cont){X Y : Set} -> (X -> Y) -> [ C ]C X -> [ C ]C Y
mapC (S <| P) f (s , k) = (s , f o k)  -- o is composition
And we're already winning. That's map implemented once for all. What's more, the functor laws hold by computation alone. There is no need for recursion on the structure of types to construct the operation, or to prove the laws.
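As a rough illustration, here is a non-dependent Haskell shadow of the same idea (Haskell cannot express P : S -> Set, so the position type p here is fixed rather than shape-dependent; Con is a name I'm introducing):

-- shapes s, one fixed position type p
data Con s p x = Con s (p -> x)

-- map once for all: keep the shape, relabel the positions
instance Functor (Con s p) where
  fmap f (Con s k) = Con s (f . k)

The functor laws then reduce to nothing more than associativity and identity of function composition.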
Nobody is attempting to claim that, operationally, Nat <| Fin gives an efficient implementation of lists, just that by making that identification we learn something useful about the structure of lists.
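To see the identification concretely, here is a hedged Haskell sketch (SNat, Fin, ListC and toList are names I'm introducing): the shape is a length, and the payload tabulates the positions.

{-# LANGUAGE GADTs, DataKinds #-}

data Nat = Z | S Nat

data SNat n where            -- singleton lengths: the shapes
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

data Fin n where             -- the n positions of a shape n
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

data ListC a where           -- the container Nat <| Fin, interpreted
  MkListC :: SNat n -> (Fin n -> a) -> ListC a

-- one direction of the isomorphism with ordinary lists
toList :: ListC a -> [a]
toList (MkListC SZ     _) = []
toList (MkListC (SS n) k) = k FZ : toList (MkListC n (k . FS))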
Let me say something about descriptions. For the benefit of lazy readers, let me reconstruct them.
data Desc : Set1 where
  var : Desc
  sg pi : (A : Set)(D : A -> Desc) -> Desc
  one : Desc                    -- could be Pi with A = Zero
  _*_ : Desc -> Desc -> Desc    -- could be Pi with A = Bool

con : Set -> Desc   -- constant descriptions as singleton tuples
con A = sg A \ _ -> one

_+_ : Desc -> Desc -> Desc   -- disjoint sums by pairing with a tag
S + T = sg Two \ { true -> S ; false -> T }
Values in Desc describe functors whose fixpoints give datatypes. Which functors do they describe?
[_]D : Desc -> Set -> Set
[ var ]D X = X
[ sg A D ]D X = Sg A \ a -> [ D a ]D X
[ pi A D ]D X = (a : A) -> [ D a ]D X
[ one ]D X = One
[ D * D' ]D X = Sg ([ D ]D X) \ _ -> [ D' ]D X

mapD : (D : Desc){X Y : Set} -> (X -> Y) -> [ D ]D X -> [ D ]D Y
mapD var f x = f x
mapD (sg A D) f (a , d) = (a , mapD (D a) f d)
mapD (pi A D) f g = \ a -> mapD (D a) f (g a)
mapD one f <> = <>
mapD (D * D') f (d , d') = (mapD D f d , mapD D' f d')
We inevitably have to work by recursion over descriptions, so it's harder work. The functor laws, too, do not come for free. We get a better representation of the data, operationally, because we don't need to resort to functional encodings when concrete tuples will do. But we have to work harder to write programs.
Note that every container has a description:
sg S \ s -> pi (P s) \ _ -> var
But it's also true that every description has a presentation as an isomorphic container.
ShD : Desc -> Set
ShD D = [ D ]D One

PosD : (D : Desc) -> ShD D -> Set
PosD var <> = One
PosD (sg A D) (a , d) = PosD (D a) d
PosD (pi A D) f = Sg A \ a -> PosD (D a) (f a)
PosD one <> = Zero
PosD (D * D') (d , d') = PosD D d + PosD D' d'

ContD : Desc -> Cont
ContD D = ShD D <| PosD D
That's to say, containers are a normal form for descriptions. It's an exercise to show that [ D ]D X is naturally isomorphic to [ ContD D ]C X. That makes life easier, because to say what to do for descriptions, it's enough in principle to say what to do for their normal forms, containers. The above mapD operation could, in principle, be obtained by fusing the isomorphisms to the uniform definition of mapC.
Similarly, if we have a notion of equality, we can say what one-hole contexts are for containers uniformly:

_-[_] : (X : Set) -> X -> Set
X -[ x ] = Sg X \ x' -> (x == x') -> Zero

dC : Cont -> Cont
dC (S <| P) = (Sg S P) <| (\ { (s , p) -> P s -[ p ] })
That is, the shape of a one-hole context in a container is the pair of the shape of the original container and the position of the hole; the positions are the original positions apart from that of the hole. That's the proof-relevant version of "multiply by the index, decrement the index" when differentiating power series.
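As a small concrete check (names mine): pairs form the container with one shape and two positions, so dC yields two shapes, each with a single remaining position, i.e. 2·X. In Haskell:

-- one-hole contexts of pairs (x, x): choose the hole, keep the rest
data PairHole x
  = HoleLeft  x   -- hole in the first slot, second component kept
  | HoleRight x   -- hole in the second slot, first component kept

-- plugging an element back into the hole recovers the pair
plug :: PairHole x -> x -> (x, x)
plug (HoleLeft  r) x = (x, r)
plug (HoleRight l) x = (l, x)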
This uniform treatment gives us the specification from which we can derive the centuries-old program to compute the derivative of a polynomial.
dD : Desc -> Desc
dD var = one
dD (sg A D) = sg A \ a -> dD (D a)
dD (pi A D) = sg A \ a -> (pi (A -[ a ]) \ { (a' , _) -> D a' }) * dD (D a)
dD one = con Zero
dD (D * D') = (dD D * D') + (D * dD D')
How can I check that my derivative operator for descriptions is correct? By checking it against the derivative of containers!
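For instance, describe pairs by var * var and compute with the clauses above:

dD (var * var) = (dD var * var) + (var * dD var)
               = (one * var) + (var * one)

Two summands, one per choice of hole, each keeping one payload: exactly what dC says for the corresponding container with one shape and two positions (and what the PairHole sketch above implements).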
Don't fall into the trap of thinking that just because a presentation of some idea is not operationally helpful that it cannot be conceptually helpful.
One last thing. The Freer trick amounts to rearranging an arbitrary functor in a particular way (switching to Haskell):
data Obfuncscate f x where
  (:<) :: forall p. f p -> (p -> x) -> Obfuncscate f x
but this is not an alternative to containers. This is a slight currying of a container presentation. If we had strong existentials and dependent types, we could write
data Obfuncscate f x where
  (:<) :: pi (s :: exists p. f p) -> (fst s -> x) -> Obfuncscate f x
so that (exists p. f p) represents shapes (where you can choose your representation of positions, then mark each place with its position), and fst picks out the existential witness from a shape (the position representation you chose). It has the merit of being obviously strictly positive exactly because it's a container presentation.
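As a hedged sanity check against the Obfuncscate declaration above (embed and project are my names): the rearrangement is functorial with no conditions on f, by relabelling positions, and it is equivalent to f whenever f is a Functor.

-- relabel positions: no constraint on f required
instance Functor (Obfuncscate f) where
  fmap g (fp :< k) = fp :< (g . k)

embed :: f x -> Obfuncscate f x
embed fx = fx :< id

project :: Functor f => Obfuncscate f x -> f x
project (fp :< k) = fmap k fp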
In Haskell, of course, you have to curry out the existential, which fortunately leaves a dependency only on the type projection. It's the weakness of the existential which justifies the equivalence of Obfuncscate f and f. If you try the same trick in a dependent type theory with strong existentials, the encoding loses its uniqueness because you can project and tell apart different choices of representation for positions. That is, I can represent Just 3 by

Just () :< const 3

or by

Just True :< \ b -> if b then 3 else 5

and in Coq, say, these are provably distinct.
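In Haskell itself, by contrast, the two representations cannot be told apart: with the hypothetical project from the sketch above, both collapse to the same value.

ex1, ex2 :: Obfuncscate Maybe Int
ex1 = Just ()   :< const 3
ex2 = Just True :< \ b -> if b then 3 else 5
-- project ex1 == Just 3 == project ex2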
Every polymorphic function between container types is given in a particular way. There's that uniformity working to clarify our understanding again.
If you have some

f : {X : Set} -> [ S <| P ]C X -> [ S' <| P' ]C X
it is (extensionally) given by the following data, which make no mention of elements whatsoever:
toS : S -> S'
fromP : (s : S) -> P' (toS s) -> P s

f (s , k) = (toS s , k o fromP s)
That is, the only way to define a polymorphic function between containers is to say how to translate input shapes to output shapes, then say how to fill output positions from input positions.
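A tiny worked instance (PairC and swapC are names I'm introducing): swapping the components of a pair, phrased as a container morphism. The shape translation is the identity, positions are filled via not, and no element is ever mentioned.

-- the pair container: one shape (implicit), positions Bool
newtype PairC x = PairC (Bool -> x)

-- toS = id; fromP = const not
swapC :: PairC x -> PairC x
swapC (PairC k) = PairC (k . not)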
For your preferred representation of strictly positive functors, give a similarly tight characterisation of the polymorphic functions which eliminates abstraction over the element type. (For descriptions, I would use exactly their reducibility to containers.)
Given two functors, f and g, it is easy to say what their composition f o g is: (f o g) x wraps up things in f (g x), giving us "f-structures of g-structures". But can you readily impose the extra condition that all of the g-structures stored in the f-structure have the same shape?

Let's say that f >< g captures the transposable fragment of f o g, where all the g shapes are the same, so that we can just as well turn the thing into a g-structure of f-structures. E.g., while [] o [] gives ragged lists of lists, [] >< [] gives rectangular matrices; [] >< Maybe gives lists which are either all Nothing or all Just.
Give >< for your preferred representation of strictly positive functors. For containers, it's this easy:
(S <| P) >< (S' <| P') = (S * S') <| \ { (s , s') -> P s * P' s' }
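In the non-dependent Haskell shadow from earlier, the same definition reads as follows (TC is my name), and it makes transposition literally a swap:

-- shapes pair up; a position is a pair of positions
data TC s p s' p' x = TC (s, s') ((p, p') -> x)

transposeTC :: TC s p s' p' x -> TC s' p' s p x
transposeTC (TC (s, s') k) = TC (s', s) (\ (p', p) -> k (p, p'))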
Containers, in their normalized Sigma-then-Pi form, are not intended to be an efficient machine representation of data. But the knowledge that a given functor, implemented however, has a presentation as a container helps us understand its structure and give it useful equipment. Many useful constructions can be given abstractly for containers, once for all, when they must be given case-by-case for other presentations. So, yes, it is a good idea to learn about containers, if only to grasp the rationale behind the more specific constructions you actually implement.