[Haskell-cafe] Practical Haskell question.
Tomasz Zielonka
tomasz.zielonka at gmail.com
Mon Jun 25 05:29:24 EDT 2007
On Mon, Jun 25, 2007 at 10:58:16AM +0200, Henning Thielemann wrote:
>
> On Mon, 25 Jun 2007, Tomasz Zielonka wrote:
>
> > On Mon, Jun 25, 2007 at 10:29:14AM +0200, Henning Thielemann wrote:
> > > Imagine all performActions contain their checks somehow. Let
> > > performActionB take an argument.
> > >
> > > > do
> > > >   x <- performActionA
> > > >   y <- performActionB x
> > > >   z <- performActionC
> > > >   return $ calculateStuff x y z
> > >
> > > Now performActionB and its included check depend on x. That is, the check
> > > relies formally on the result of performActionA and thus check B must be
> > > performed after performActionA.
> >
> > IIUC, this limitation of Monads was one of the reasons why John Hughes
> > introduced the new Arrow abstraction.
>
> How would this problem be solved using Arrows?
Maybe it wouldn't. What I should have said is that in a Monad the entire
computation after "x <- performActionA" depends on x, even if it doesn't
use x immediately. Let's expand the do-notation (here for the variant
where performActionB takes no argument):
  performActionA >>= \x ->
    performActionB >>= \y ->
      performActionC >>= \z ->
        return (calculateStuff x y z)
If you wanted to analyze the computation without executing it, you would
start at the top-level bind operator (>>=):

  performActionA >>= f

and you would find it impossible to examine f without supplying it an
argument. As a function, f is a black box.
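To make the "black box" concrete, here is a rough sketch. The Program
type below is made up just for illustration - it reifies return, a named
primitive action and (>>=) as constructors, so we can try to analyze a
program as a data structure:

  {-# LANGUAGE GADTs #-}

  data Program a where
    Return :: a -> Program a
    Action :: String -> Program Int                    -- a named primitive action
    Bind   :: Program a -> (a -> Program b) -> Program b

  instance Functor Program where
    fmap f m = Bind m (Return . f)

  instance Applicative Program where
    pure      = Return
    mf <*> mx = Bind mf (\f -> Bind mx (Return . f))

  instance Monad Program where
    (>>=) = Bind

  -- Try to list the names of all actions without running the program.
  -- At Bind we are stuck: the continuation is an ordinary function, so
  -- we cannot look inside it without first producing a value of type
  -- 'a', i.e. without running the left-hand side.
  staticActions :: Program a -> Maybe [String]
  staticActions (Return _) = Just []
  staticActions (Action n) = Just [n]
  staticActions (Bind _ _) = Nothing

Even though Bind keeps the whole computation around as data, everything
after (>>=) is hidden behind a lambda, which is exactly the problem above.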
With Arrows it could be possible to inspect the structure of the
computation without executing it, but it might be impossible to write
some kinds of checks.
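For example (again only a sketch, with invented names), an arrow can carry
a static part next to the function, and that static part can be read off
without running anything:

  import Prelude hiding (id, (.))
  import Control.Category
  import Control.Arrow

  -- Each computation is paired with a list of action names.
  newtype Labelled a b = Labelled ([String], a -> b)

  instance Category Labelled where
    id = Labelled ([], id)
    Labelled (ns2, g) . Labelled (ns1, f) = Labelled (ns1 ++ ns2, g . f)

  instance Arrow Labelled where
    arr f                    = Labelled ([], f)
    first (Labelled (ns, f)) = Labelled (ns, \(x, c) -> (f x, c))

  -- A "primitive action" with a name (pure here, to keep it short).
  action :: String -> (a -> b) -> Labelled a b
  action n f = Labelled ([n], f)

  -- The static part is available without supplying any input.
  actionNames :: Labelled a b -> [String]
  actionNames (Labelled (ns, _)) = ns

  example :: Labelled Int Int
  example = action "A" (+1) >>> action "B" (*2) >>> action "C" (subtract 3)

  -- actionNames example == ["A","B","C"], without running example at all

The price is that the structure has to be built from combinators like
(>>>) and first; as soon as a later stage has to choose its action based
on the value of x, you are back to something bind-like, which is what I
meant by "impossible to write some kinds of checks".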
Anyway, I have little experience with Arrows, so I may be wrong, and
surely someone can explain it better.
Best regards
Tomek