 

Examples of a monad whose Applicative part can be better optimized than the Monad part

In one discussion I heard that the Applicative interface of some parsers is implemented differently, and more efficiently, than their Monad interface. The reason is that with Applicative we know all the "effects" in advance, before the whole effectful computation is run. With monads, effects can depend on values produced during the computation, so this optimization is not possible.
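To illustrate what I mean by "knowing the effects in advance", here is a toy sketch (purely hypothetical, not one of the parsers from that discussion): a wrapper that tags an IO action with a static count of the primitive effects it performs. The Applicative combinator can add the counts before anything runs; the Monad's bind cannot, because the second computation only exists once a run-time value is available.

-- Hypothetical toy: an IO action tagged with a statically known count
-- of the primitive effects it performs.
import Control.Monad (liftM)

data Counted a = Counted Int (IO a)

-- Every primitive effect counts as one.
effect :: IO a -> Counted a
effect = Counted 1

instance Functor Counted where
    fmap = liftM

instance Applicative Counted where
    pure = Counted 0 . pure
    -- Both counts are known statically, before any effect runs.
    Counted m f <*> Counted n x = Counted (m + n) (f <*> x)

instance Monad Counted where
    return = pure
    -- The count of (k a) depends on the run-time value a, so only the
    -- first computation's count can be recorded statically here.
    Counted m x >>= k = Counted m (x >>= \a -> let Counted _ y = k a in y)

The run-time behaviour of <*> and ap is the same here; the difference is that the Applicative combinator keeps the static annotation exact while bind can only approximate it, which is the kind of knowledge I understand such parsers exploit.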

I'd like to see some good examples of this. It can be a very simple parser or some other monad entirely, that's not important. The important thing is that the Applicative interface of such a monad agrees with its return and ap, yet using the Applicative interface produces more efficient code.

Update: Just to clarify, here I'm not interested in applicatives that can't be monads. The question is about things that are both.

asked Sep 03 '13 by Petr



1 Answer

Another example is a strict left fold. You can write an Applicative instance which lets you compose folds so that the resulting fold is performed over the data in a single pass and in constant space. The Monad instance, however, needs to re-iterate from the beginning of the data for each bind and to keep the whole list in memory.

{-# LANGUAGE GADTs #-}

import Criterion.Main

import Data.Monoid
import Control.Applicative
import Control.Monad

import Prelude hiding (sum)

data Fold e r where
    Step :: !(a -> e -> a) -> !a -> !(a -> r) -> Fold e r
    Bind :: !(Fold e r) -> !(r -> Fold e s) -> Fold e s

data P a b = P !a !b

instance Functor (Fold e) where
    fmap f (Step step acc ret) = Step step acc (f . ret)
    fmap f (Bind fld g) = Bind fld (fmap f . g)

instance Applicative (Fold e) where
    pure a = Step const a id

    Step fstep facc fret <*> Step xstep xacc xret = Step step acc ret where
        step (P fa xa) e = P (fstep fa e) (xstep xa e)
        acc = P facc xacc
        ret (P fa xa) = (fret fa) (xret xa)

    Bind fld g <*> fldx = Bind fld ((<*> fldx) . g)
    fldf <*> Bind fld g = Bind fld ((fldf <*>) . g)

instance Monad (Fold e) where
    return = pure
    (>>=) = Bind

fold :: Fold e r -> [e] -> r
fold (Step _ acc ret) []         = ret acc
fold (Step step acc ret) (x:xs)  = fold (Step step (step acc x) ret) xs
fold (Bind fld g) lst            = fold (g $ fold fld lst) lst

monoidalFold :: Monoid m => (e -> m) -> (m -> r) -> Fold e r
monoidalFold f g = Step (\a -> mappend a . f) mempty g

count :: Num n => Fold e n
count = monoidalFold (const (Sum 1)) getSum

sum :: Num n => Fold n n
sum = monoidalFold Sum getSum

avgA :: Fold Double Double
avgA = liftA2 (/) sum count

avgM :: Fold Double Double
avgM = liftM2 (/) sum count

main :: IO ()
main = defaultMain
    [ bench "Monadic"     $ nf (test avgM) 1000000
    , bench "Applicative" $ nf (test avgA) 1000000
    ] where test f n = fold f [1..n]

I wrote the above off the top of my head as an example, so it might not be the optimal implementation for applicative and monadic folds, but running it gives me:

benchmarking Monadic
mean: 119.3114 ms, lb 118.8383 ms, ub 120.2822 ms, ci 0.950
std dev: 3.339376 ms, lb 2.012613 ms, ub 6.215090 ms, ci 0.950

benchmarking Applicative
mean: 51.95634 ms, lb 51.81261 ms, ub 52.15113 ms, ci 0.950
std dev: 850.1623 us, lb 667.6838 us, ub 1.127035 ms, ci 0.950
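As a quick sanity check (not part of the original benchmark), loading the definitions above in GHCi should show both folds producing the same result, only at a different cost:

ghci> fold avgA [1..10]
5.5
ghci> fold avgM [1..10]
5.5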
answered Oct 05 '22 by shang