Dan Doel dan.doel at gmail.com
Sun Sep 4 20:34:26 CEST 2011

On Sun, Sep 4, 2011 at 12:24 AM, Ivan Lazar Miljenovic
<ivan.miljenovic at gmail.com> wrote:
> On 4 September 2011 12:34, Daniel Peebles <pumpkingod at gmail.com> wrote:
>> Hi all,
>> For example, if I write in a do block:
>> x <- action1
>> y <- action2
>> z <- action3
>> return (f x y z)
>> that doesn't require any of the context-sensitivity that Monads give you, and
>> could be processed a lot more efficiently by a clever Applicative instance
>> (a parser, for instance).
>
> What advantage is there in using Applicative rather than Monad for
> this?  Does it _really_ lead to an efficiency increase?

Forget about efficiency. What if I just want nicer syntax for some
applicative stuff? For instance, this is applicative:

do x <- fx ; y <- fy ; z <- fz ; pure (x*x + y*y + z*z)

But my only option for writing it so that it requires only Applicative is something like:

(\x y z -> x*x + y*y + z*z) <$> fx <*> fy <*> fz

Even if I had idiom brackets, it'd just be:

(| (\x y z -> x*x + y*y + z*z) fx fy fz |)
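To make the applicative style above concrete, here's a minimal sketch instantiated at Maybe (the names fx, fy, fz and sumSquares are illustrative, not from the thread):

```haskell
-- The same pipeline as above, made concrete: each <*> feeds one effectful
-- argument to the pure function on the left.
sumSquares :: Maybe Int
sumSquares = (\x y z -> x*x + y*y + z*z) <$> fx <*> fy <*> fz
  where
    fx = Just 1
    fy = Just 2
    fz = Just 3

main :: IO ()
main = print sumSquares  -- prints Just 14
```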

Basically the situation boils down to this: applicatives admit a form
of let as sugar:

let
x = ex
y = ey
z = ez
in ...

where the definitions are not recursive, and x is not in scope in ey
and so on. This desugars to (in lambda calculus):

(\x y z -> ...) (ex) (ey) (ez)

but we are currently forced to write in the latter style, because
there's no support for the sugared syntax. So if anyone's looking for
motivation, ask yourself if you've ever found let or where useful. And
of course, in this case, we can't just beta reduce the desugared
expression, because of the types involved: each fx has type f a rather
than a, so the lambda can't simply be applied to it.
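The let-to-lambda desugaring described above can be sketched in plain Haskell (the names and values here are illustrative):

```haskell
-- A non-recursive let is the same thing as applying a lambda to the
-- right-hand sides, one argument per binding.
viaLet :: Int
viaLet = let x = 2; y = 3; z = 4 in x*x + y*y + z*z

viaLambda :: Int
viaLambda = (\x y z -> x*x + y*y + z*z) 2 3 4

main :: IO ()
main = print (viaLet, viaLambda)  -- prints (29,29)
```

In the applicative version the arguments are wrapped (f Int, not Int), which is exactly why the beta reduction is blocked.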

Comprehensions are rather like an expression with a where:

[ x*x + y*y + z*z | x <- ex, y <- ey, z <- ez ]

-- Dan
