Consider the following example functions which both add a random value to the pure input:
addRand1 :: (MonadRandom m) => m (Int -> Int)
addRand2 :: (MonadRandom m) => Int -> m Int -- *can* be written as m (Int -> Int)
It is easy to convert addRand1 into a function with the same signature as addRand2, but not vice versa. To me, this provides strong evidence that I should write addRand1 over addRand2. In this example, addRand1 has the more truthful/general type, which typically captures important abstractions in Haskell. While having the "right" signature seems a crucial aspect of functional programming, I also have several practical reasons why addRand2 might be a better signature, even though it could be written with addRand1's signature.
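The easy direction of that conversion can be sketched as follows; it needs only Functor, so it works uniformly for every m (the helper name convert1to2 is mine, not from the original discussion):

```haskell
-- The conversion from m (Int -> Int) to Int -> m Int needs only fmap;
-- it works uniformly for every Functor m.
-- (convert1to2 is an illustrative name, not from the original post.)
convert1to2 :: Functor m => m (Int -> Int) -> Int -> m Int
convert1to2 mf x = fmap ($ x) mf
```

The reverse direction has no such uniform definition, which is the asymmetry claimed above.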
With interfaces:
class FakeMonadRandom m where
  getRandom :: (Random a, Num a) => m a
  getRandomR1 :: (Random a, Num a) => (a,a) -> m a
  getRandomR2 :: (Random a, Num a) => m ((a,a) -> a)
Suddenly getRandomR1 seems "more general" in the sense that it permits more instances (that repeatedly call getRandom until the result is in the range, for example) compared to getRandomR2, which seems to require some sort of reduction technique.
addRand2 is easier to write/read:
addRand1 :: (MonadRandom m) => m (Int -> Int)
addRand1 = do
  x <- getRandom
  return (+x) -- in general requires `return $ \a -> ...`
addRand2 :: (MonadRandom m) => Int -> m Int
addRand2 a = (a+) <$> getRandom
addRand2 is easier to use:
foo :: (MonadRandom m) => m Int
foo = do
  x <- addRand1 <*> (pure 3) -- ugly syntax
  f <- addRand1              -- or a two-step process: sequence the function, then apply it
  x' <- addRand2 3           -- easy!
  return $ f 3 + x + x'
addRand2 is harder to misuse: consider getRandomR :: (MonadRandom m, Random a) => (a,a) -> m a. For a given range, we can sample repeatedly and get different results, which is probably what we intend. However, if we instead have getRandomR :: (MonadRandom m, Random a) => m ((a,a) -> a), we might be tempted to write

do
  f <- getRandomR
  return $ replicate 20 $ f (-10,10)

but will be very surprised by the result!
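To make the surprise concrete, here is a self-contained sketch with a hand-rolled state-passing generator standing in for MonadRandom (the Gen type, its toy LCG, and all names here are my own stand-ins, not a real RNG): all twenty list elements come out identical, because the random draw happens once, when f is bound, not once per call of f.

```haskell
-- A self-contained stand-in for MonadRandom: a hand-rolled state-passing
-- generator with a toy LCG (Gen, step, etc. are my own illustrative names).
newtype Gen a = Gen { runGen :: Int -> (a, Int) }

instance Functor Gen where
  fmap f (Gen g) = Gen $ \s -> let (a, s') = g s in (f a, s')

instance Applicative Gen where
  pure a = Gen $ \s -> (a, s)
  Gen gf <*> Gen ga = Gen $ \s ->
    let (f, s')  = gf s
        (a, s'') = ga s'
    in (f a, s'')

instance Monad Gen where
  Gen ga >>= k = Gen $ \s -> let (a, s') = ga s in runGen (k a) s'

-- One step of a toy linear congruential generator.
step :: Gen Int
step = Gen $ \s -> let s' = (1103515245 * s + 12345) `mod` 2147483648 in (s', s')

-- The m ((a,a) -> a) shape: the random draw happens ONCE, when f is bound.
getRandomR2 :: Gen ((Int, Int) -> Int)
getRandomR2 = do
  r <- step
  return $ \(lo, hi) -> lo + r `mod` (hi - lo + 1)

-- Every element of the result is the same number!
surprise :: Gen [Int]
surprise = do
  f <- getRandomR2
  return $ replicate 20 $ f (-10, 10)
```

Running runGen surprise on any seed yields a list of twenty copies of one value, which is exactly the misuse the (a,a) -> m a shape prevents.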
I'm feeling very conflicted about how to write monadic code. The "version 2" signature seems better in many cases, but I recently came across an example where the "version 1" signature was required.*
What sort of factors should influence my design decisions w.r.t. monadic signatures? Is there some way to reconcile apparently conflicting goals of "general signatures" and "natural, clean, easy-to-use, hard-to-misuse syntax"?
*: I wrote a function foo :: a -> m b, which worked just fine for (literally) many years. When I tried to incorporate it into a new application (a DSL with HOAS), I discovered I was unable to, until I realized that foo could be rewritten to have the signature m (a -> b). Suddenly my new application was possible.
This depends on multiple factors:
The key to understanding the difference between Int -> m Int and m (Int -> Int) is that in the former case, the effect (the m ...) can depend on the input argument. For example, if m is IO, you could have a function that launches n missiles, where n is the function argument. On the other hand, the effect in m (Int -> Int) doesn't depend on anything: the effect doesn't "see" the argument of the function it returns.
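A small way to see this with Maybe as the effect (the function names here are illustrative, not from the answer):

```haskell
-- With Int -> Maybe Int, the effect (here: failure) can inspect the argument.
-- (safeHalf and alwaysHalf are illustrative names.)
safeHalf :: Int -> Maybe Int
safeHalf n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing -- whether we fail depends on the input

-- With Maybe (Int -> Int), we must commit to Just or Nothing before any
-- argument exists, so failure cannot depend on the input.
alwaysHalf :: Maybe (Int -> Int)
alwaysHalf = Just (`div` 2)
```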
Coming back to your case: you take a pure input, generate a random number and add it to the input. We can see that the effect (generating the random number) doesn't depend on the input. This is why we can have the signature m (Int -> Int). If the task were instead to generate n random numbers, for example, the signature Int -> m [Int] would work, but m (Int -> [Int]) wouldn't.
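This point doesn't even need randomness; in the list monad the "size" of the effect depends on n (bitStrings is an illustrative name of my own):

```haskell
import Control.Monad (replicateM)

-- In the list monad the *number* of nondeterministic results grows with n,
-- so the effect itself depends on the argument: Int -> [[Int]] works, but
-- no value of type [Int -> [Int]] could produce the same behaviour, since
-- its outer list has a fixed length before n is ever seen.
-- (bitStrings is an illustrative name.)
bitStrings :: Int -> [[Int]]
bitStrings n = replicateM n [0, 1]
```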
Regarding usability, Int -> m Int is more common in the monadic context, because most monadic combinators expect signatures of the form a -> b -> ... -> m r. For example, you'd normally write

getRandom >>= addRand2

or

addRand2 =<< getRandom

to add a random number to another random number.
Signatures like m (Int -> Int) are less common for monads, but often used with applicative functors (each monad is also an applicative functor), where effects can't depend on parameters. In particular, the operator <*> would work nicely here:

addRand1 <*> getRandom
Regarding generality, the signature influences how difficult it is to use or to implement. As you observed, addRand1 is more general from the caller's perspective: the caller can always convert it to addRand2 if needed. On the other hand, addRand2 is less general, and therefore easier to implement. In your case it doesn't really apply, but in some cases it might happen that it's possible to implement a signature like m (Int -> Int), but not Int -> m Int. This is reflected in the type hierarchy: monads are more specific than applicative functors, which means they give more power to their user but are "harder" to implement - every monad is an applicative, but not every applicative can be made into a monad.
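The standard witness of that gap is ZipList from base, an Applicative that admits no lawful Monad instance:

```haskell
import Control.Applicative (ZipList (..))

-- ZipList's <*> zips pointwise; it is the standard example of an
-- Applicative for which no lawful Monad instance exists.
zipped :: [Int]
zipped = getZipList ((+) <$> ZipList [1, 2, 3] <*> ZipList [10, 20, 30])
```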
It is easy to convert addRand1 into a function with the same signature as addRand2, but not vice versa.

Ahem.
-- | Adds a random value to its input
addRand2 :: MonadRandom m => Int -> m Int
addRand2 x = fmap (+x) getRandom
-- | Returns a function which adds a (randomly chosen) fixed value to its input
addRand1 :: MonadRandom m => m (Int -> Int)
addRand1 = fmap (+) (addRand2 0)
How come this works? Well, addRand1's job is to come up with a randomly chosen value and partially apply + to it. Adding a random number to a dummy value is a perfectly good way of coming up with a random number!
I think you might be confused about the quantifiers in @chi's statement. He didn't say

For all monads m and types a and b, you can't convert a -> m b to m (a -> b).

∀ m a b. ¬ ∃ f. f :: Monad m => (a -> m b) -> m (a -> b)

He said

You can't convert a -> m b to m (a -> b) for all monads m and types a and b.

¬ ∃ f. f :: ∀ m a b. Monad m => (a -> m b) -> m (a -> b)

Nothing stops you from writing (a -> m b) -> m (a -> b) for a particular monad m and pair of types a and b.
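For instance, for the particular monad ((->) r) the conversion exists and is almost embarrassingly simple (distribute is my name for it):

```haskell
-- For the particular monad ((->) r) -- the reader monad -- the conversion
-- (a -> m b) -> m (a -> b) specialises to (a -> r -> b) -> r -> a -> b,
-- which is just flip. (distribute is an illustrative name.)
distribute :: (a -> r -> b) -> (r -> (a -> b))
distribute = flip
```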