I've just been exploring Rank2Types and RankNTypes to try to get familiar with them. But I can't work out why the following does not work.
```haskell
g :: (forall a. forall b. a -> b) -> x -> y -> (u,v)
g p x y = (p x, p y)
```
This definition is accepted by the compiler, but it fails when I try to use it:
```
ghci> g id 1 2

<interactive>:35:3:
    Couldn't match type `a' with `b'
      `a' is a rigid type variable bound by
          a type expected by the context: a -> b at <interactive>:35:1
      `b' is a rigid type variable bound by
          a type expected by the context: a -> b at <interactive>:35:1
    Expected type: a -> b
      Actual type: a -> a
    In the first argument of `g', namely `id'
    In the expression: g id 1 2
```
I'm struggling to understand why `a -> a` is not an acceptable type for the expected `a -> b`.
For all types `a` and `b`, a function of type `forall a. forall b. a -> b` must be able to take a value of type `a` and produce a value of type `b`. So it must, for example, be possible to put in an `Int` and get out a `String`.
You cannot get a `String` out of `id` if you put in an `Int`; you only ever get back the same type that you put in. So `id` is not of type `forall a. forall b. a -> b`. In fact, there can be no total function of that type without typeclass constraints.
It turns out you can sort-of-kind-of do something close(ish) to what you want using `ConstraintKinds`, but it's neither pretty to write nor pretty to use:
The idea is to parametrize `g` with constraints that specify which conditions need to be met by `x`, `y`, `u` and `v`, and what the relationships between `x` and `u` and between `y` and `v` need to be. Since we don't need all those constraints in all cases, we also introduce two dummy typeclasses (one for constraints on individual parameters and one for the "relationship" constraints), so that we can use those as the constraints where none are needed (GHC is not able to infer the constraints if we don't specify them ourselves).
Some example constraints to make this clearer:

- With `id` as the function, `x` must be equal to `u` and `y` must be equal to `v`. There are no constraints on `x`, `y`, `u` or `v` individually.
- With `show`, `x` and `y` must be instances of `Show` and `u` and `v` must be equal to `String`. There are no constraints on the relationship between `x` and `u` or between `y` and `v`.
- With `read . show`, `x` and `y` need to be instances of `Show` and `u` and `v` need to be instances of `Read`. Again, no constraints on the relationships between them.
- If we define a class `Convert a b where convert :: a -> b` and we pass in `convert`, then we need `Convert x u` and `Convert y v`, and no constraints on the individual parameters.

So here's the code to implement this:
```haskell
{-# LANGUAGE Rank2Types, ConstraintKinds, FlexibleInstances, MultiParamTypeClasses #-}

class Dummy a
instance Dummy a

class Dummy2 a b
instance Dummy2 a b

g :: forall c. forall d. forall e. forall x. forall y. forall u. forall v.
   (c x, c y, d u, d v, e x u, e y v) =>
   (forall a. forall b. (c a, d b, e a b) => a -> b) -> x -> y -> (u,v)
g p x y = (p x, p y)
```
And here's how to use it:
Using `read . show` to convert between different types of numbers:

```
> (g :: (Show x, Show y, Read u, Read v, Dummy2 x u, Dummy2 y v) => (forall a. forall b. (Show a, Read b, Dummy2 a b) => a -> b) -> x -> y -> (u,v)) (read . show) 1 2 :: (Double, Int)
(1.0,2)
```
Using `id`:

```
> (g :: (Dummy x, Dummy y, x~u, y~v) => (forall a. forall b. (Dummy a, Dummy b, a~b) => a -> b) -> x -> y -> (u,v)) id 1 2.0
(1,2.0)
```
Using `show`:

```
> (g :: (Show x, Show y, String~u, String~v, Dummy2 x u, Dummy2 y v) => (forall a. forall b. (Show a, String~b, Dummy2 a b) => a -> b) -> x -> y -> (u,v)) show 1 2.0
("1","2.0")
```
As you can see, this is horribly verbose and unreadable, because you need to specify a signature for `g` every time you use it. Without the signature, I don't think it's possible to get GHC to correctly infer the constraints (or at least I don't know how).