[Haskell-cafe] Applicative is like an Arrow
Thiago Negri
evohunz at gmail.com
Fri Aug 16 22:02:51 CEST 2013
You just made my day.
I was trying to understand these things so hard and couldn't get it.
Your analogies were brilliant.
I'll read all links/papers posted here to get a deeper understanding of
these things.
I'll just skip dependently typed stuff for now, heh.
Thank you,
Thiago.
2013/8/16 Mathijs Kwik <mathijs at bluescreen303.nl>
> Thiago Negri <evohunz at gmail.com> writes:
>
> > I just stumbled upon the Applicative term.
> > Arrows are quite difficult for me to understand at the moment.
> > I guess it needs time to digest.
> >
> > But, as I understand so far, Applicative and Arrows look like the same
> > thing.
> >
> > Please, enlighten me.
>
> I would like to point out this paper:
> http://homepages.inf.ed.ac.uk/slindley/papers/idioms-arrows-monads.pdf
>
> In short: arrows are a bit more powerful than idioms (applicative) but a
> bit less powerful than monads. However, power sometimes comes at a price.
> All 3 have to do with combining / sequencing effects, but they differ in
> subtle but important ways. Every monad gives rise to an arrow and every
> arrow gives rise to an idiom, but not the other way around.
>
> I will first give an overview of the differences, then try to explain
> what I mean... (my terminology might be a bit awkward/wrong)
>
> Idiom:
>   Basic combining strategy: i (a -> b) -> i a -> i b
>   Sequencing: effects are applied in sequence;
>               values (the stuff "inside") are isolated
>   Shape depends on values: no
>
> Arrow:
>   Basic combining strategy: a b c -> a c d -> a b d
>   Sequencing: effects are applied in sequence;
>               values are sequenced too and can "see" upstream results
>   Shape depends on values: static choices only
>
> Monad:
>   Basic combining strategy: m a -> (a -> m b) -> m b
>   Sequencing: effects are applied in sequence;
>               values are sequenced too and can "see" upstream results
>   Shape depends on values: yes, fully dynamic
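>
> Specialised to one effect (Maybe), the three combining strategies read
> as follows (just the standard types written out for comparison; Kleisli
> is the usual wrapper that turns a monad into an arrow):
>
>   (<*>) :: Maybe (a -> b) -> Maybe a -> Maybe b
>   (>>>) :: Kleisli Maybe a b -> Kleisli Maybe b c -> Kleisli Maybe a c
>   (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b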
>
>
> Now, what do I mean by all this?
> Basically these 3 abstractions consist of 3 things:
> - effects
> - values
> - shape
> Effects can be things like "carries state around" (State), "can
> fail" (Maybe), "multiple answers" (List) and more. Values are the pure
> stuff "inside", and what I call 'shape' is the general control flow of a
> computation.
> Furthermore, I visualize these abstractions by thinking of a factory
> hall with boxes (values), people (effects) and an assembly line
> (shape).
>
>
> Idioms are fully static: values cannot see/depend on each other or on
> the result of effects. Basically the computation is split into 2 phases:
> - effects+gather
> - apply gathered results
> example:
> pure (+) <*> Just 3 <*> Just 5
> The first phase just works through the parts (in sequence) and collects
> the (pure) contents. In this case (Maybe) this means looking for the
> Just constructor to continue, or halting on Nothing. The content inside
> is being treated like a black box. It is not made aware of the effects
> (whether or not Nothing was found somewhere) and it is not being
> examined to choose a different codepath.
> Then if everything worked out (no Nothings were found), the collected
> results are taken out of their black boxes and applied. In this phase
> these results (the +, the 3 and the 5) don't know anything about the
> effects that happened.
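>
> A quick check in GHCi (nothing here beyond the standard Maybe
> instances):
>
>   pure (+) <*> Just 3  <*> Just 5   -- Just 8
>   pure (+) <*> Nothing <*> Just 5   -- Nothing: the effect halted,
>                                     -- no value ever got to inspect it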
>
> In "factory visualization": every part of the computation (stuff between
> <*>) is a person that will need to perform some task(effect) and deliver
> some result in a box. They will only start performing their task when
> they see a box passing by from the person upstream. They cannot look in
> that box or make decisions based on it or take it off. At the end of the
> line, some manager receives all the boxes and opens them to combine the
> results.
>
> This is fine for a whole lot of applications and has the advantage that
> the shape of the entire assembly line is clear even before starting
> it. This means (static) optimization can be performed and it's easy to
> reason about the program/costs. Garbage collection (sending workers
> home) is easier, because it's very clear what data is needed where and
> when. I will say more about these optimizations further down. Of course,
> this assembly line is not flexible enough for more advanced cases.
>
> Let's see an example of that (State):
> pure const <*> get <*> put 8
> This is a perfectly fine idiom, albeit not very useful.
> When run (with initial state 4) the first worker will package up a box
> with "const" and send it downstream. The second worker gets the seeded
> state from the "state cupboard" and puts it in a box (4). When that box
> passes by worker 3, he will walk to the state cupboard and put 8 in
> it. Then to signal he's ready, he packs a box with (). At the end of the
> line, someone opens the boxes "const" "4" and "()", which computes to
> just 4. So we end up with the answer 4 and an updated cupboard
> containing 8.
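>
> This one actually runs as written, using the Applicative instance that
> Control.Monad.State provides:
>
>   import Control.Monad.State
>
>   runState (pure const <*> get <*> put 8) 4   -- (4, 8)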
>
> Why is this not very useful? Well we would probably want to be able to
> put state in that depends on certain stuff we got out earlier, instead
> of just supplying a hard coded 8 that was known before starting the
> line. Unfortunately, this is not possible with idioms as workers cannot
> open each other's boxes.
>
>
> Now, let's skip Arrows for a minute and move straight to Monads:
>
>
> get >>= \x -> put (x + 1) >> return x
> As you can see, monads tackle this issue by putting everything in
> sequence. Not just the effects, but values too. Like this, they can
> "see" upstream values and upstream effects and influence the effects and
> shape of things to come further "downstream".
> Another example (State again):
>
> do x <- get
>    if x > 100
>      then do put 0
>              return "overflow"
>      else do put (x+1)
>              executeBatch x
>              return "normal operation"
>
> This example shows nicely how the entire shape is dynamically chosen
> when using a Monad, influencing both which effects will apply (will the
> batch job run?) and result (status string).
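>
> A runnable version of the above, with executeBatch stubbed out as a
> no-op (the original leaves it abstract, so the stub is my own):
>
>   import Control.Monad.State
>
>   executeBatch :: Int -> State Int ()
>   executeBatch _ = return ()   -- hypothetical stub
>
>   step :: State Int String
>   step = do x <- get
>             if x > 100
>               then do put 0
>                       return "overflow"
>               else do put (x+1)
>                       executeBatch x
>                       return "normal operation"
>
>   -- runState step 4    ==> ("normal operation", 5)
>   -- runState step 200  ==> ("overflow", 0)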
>
> But let's look a bit closer at how Monad performs this trick:
> get >>= (\x -> put (x + 1) >>= (\_ -> return x))
>
> get >>= (\x ->
>   if x > 100 then (put 0 >>= (\_ -> return "overflow"))
>              else (put (x+1) >>= (\_ ->
>                     executeBatch x >>= (\_ ->
>                       return "normal operation"))))
>
> I've added parentheses to clearly show the "parts" involved.
> Basically both these cases look like
> get >>= xyz (only 2 parts, get and some unknown xyz)
> So what can we say about xyz? Not a whole lot. xyz is a function that
> will return a new monadic value when given an input. But we cannot
> really statically inspect a function without applying it to an input
> value. So basically we don't know what will happen next until we are
> already running.
>
> To look at it from the factory-visualization perspective, we have
> divided the assembly-line into a lot of separate parts. Worker 1 gets
> the state from the state cupboard and puts it on the line. But the line
> is very short and just flows to worker 2. Worker 2 then receives the
> box, opens it and then starts to reorganize the factory to proceed. He
> might need to place new pieces of assembly line in new parts of the
> factory, phone workers to come to work, whatever. Basically the entire
> factory is 1 big black box except for the task/effect of worker 1.
>
> This is extremely powerful, as worker 2 can decide truly dynamically
> what to do. But we lose our possibility to optimize things at compile
> time and to reason about the shape and possible outcomes from the
> language itself. Sure, by looking at the sources we (and the compiler)
> can figure out a bit about what's going on, but far less well than with
> idioms, as things really depend on runtime inputs now.
>
>
>
> Now, let's look at Arrows and how they tackle the same problem:
>
> get >>> arr (\x -> (x + 1, x)) >>> first put >>> arr snd
>
> or using arrow syntax:
>
> proc _ -> do
>   r <- get -< ()
>   put -< r + 1
>   returnA -< r
>
> In factory-visualization: workers can look inside each other's boxes or
> take them off the line.
> Worker 1 gets the state from the cupboard and puts it in a box.
> Worker 2 looks at this box, calculates +1 and updates the state
> cupboard.
> When we run this, starting with 4 in the state cupboard, we get back the
> result (4) at the end of the line, while the state cupboard now contains
> 5 (a value that depended on a previous effect, which was not possible
> with idioms). So what's up with these "arr" and tuples and
> first/second/fst/snd parts in the normal (non-arrow) notation? I like
> to think of it as flow control for the assembly line. Like tags being
> put on the boxes to indicate which co-worker they are meant for. Simple
> robotic parts of the line (arr snd) just throw out boxes that aren't
> needed anymore so they don't have to travel all the way to the end of
> the line.
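>
> To actually run the arrow example, one encoding (my choice here, not
> the only one) is to wrap State in Kleisli, which turns any monad into
> an arrow:
>
>   import Control.Arrow
>   import Control.Monad.State
>
>   getA :: Kleisli (State Int) () Int
>   getA = Kleisli (const get)
>
>   putA :: Kleisli (State Int) Int ()
>   putA = Kleisli put
>
>   prog :: Kleisli (State Int) () Int
>   prog = getA >>> arr (\x -> (x + 1, x)) >>> first putA >>> arr snd
>
>   -- runState (runKleisli prog ()) 4  ==>  (4, 5)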
>
> So from this example it's clear that arrows have some extra power over
> idioms, without sacrificing the clarity of the entire factory-flow.
> All static optimizations that are possible with idioms should still work
> with arrows, so basically I think they are just plain better from all
> technical points of view. But syntactically I cannot stand them :)
> Shuffling tuples around, constructing and deconstructing them
> everywhere, just doesn't lead to readable code, and I'm not 100% sure
> that the compiler can fully optimize them away. Arrow syntax helps a
> lot, but compared to an elegant idiom (especially if you have idiom
> brackets available, as in Idris or via SHE) it's just ugly.
>
> So how about making dynamic choices based on box contents?
> This is possible using ArrowChoice. The existence of this separate class
> probably implies that ArrowChoice cannot always be implemented for every
> Arrow, but it might just be that the designers of the Arrow classes
> wanted to keep things as modular as possible.
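>
> For reference, the class itself (this part is Control.Arrow as-is):
>
>   class Arrow a => ArrowChoice a where
>     left :: a b c -> a (Either b d) (Either c d)
>
> plus right, (+++) and (|||), which have default definitions in terms of
> left. An if-then-else then becomes a (|||) between two continuation
> lines:
>
>   (|||) :: ArrowChoice a => a b d -> a c d -> a (Either b c) d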
>
> I'm not going into the technical details of ArrowChoice (it's a bunch of
> box labeling tricks once again) but in factory-visualization there is a
> big difference between the if-then-else within Monad and the
> if-then-else in Arrow. Remember, for Monad, after making the choice, the
> rest of the factory had to be reorganized and connected before we could
> continue. We, the programmers, knew there were only 2 possible outcomes,
> but at runtime everything was handled as if any arbitrary new factory
> setup should be possible.
> With Arrows, we can nicely see (before starting) that a certain part of
> the assembly line will make a choice. It has 2 lines flowing from it so
> it's clear that at runtime the worker will have to place the boxes on 1
> of these continuation lines. We can reason about both situations and see
> the remaining layout of the factory and reason about that too. At
> runtime it remains static. No moving heavy machinery around.
>
> I think that in 99% of all situations, this solution is way more elegant
> than the monad solution. The full dynamism that monads add is often
> overkill. Looking at most code, the possible code-paths *are* known
> beforehand (to programmers), so it's a shame that knowledge gets
> destroyed by monads by using a function (a -> m b) to "bind" the parts.
>
> However, in Haskell, there are many nice libraries that provide monadic
> interfaces and not a lot that offer Arrow+ArrowChoice, even where arrows
> would be perfectly suitable and the dynamism of monads isn't needed.
> So because of this, if you need a few of these libraries for your
> application, you stick to monads, because you don't want to end up with
> many different syntaxes everywhere.
>
>
> Ok, nearing the end. This got a bit bigger than I planned =)
>
>
> Yet another view of the arrow -> monad difference:
>
> class Arrow a => ArrowApply a where
>   app :: a (a b c, b) c
>
> ArrowApply is "the last step". If your arrow can implement ArrowApply,
> it basically is a monad, and you've lost the nice static properties
> that idiom and arrow gave. It's quite clear from the signature of 'app'
> why this is. In this representation it's not a worker himself that
> starts to reorganize the entire factory when he receives his input. In
> this case he just receives a rather big box containing an entire new
> assembly line to be placed in the next part of the factory, and a small
> box to put into that new line. See? The same dynamism from an even more
> extreme point of view :)
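>
> The standard library makes this "last step" concrete: Control.Arrow
> defines
>
>   newtype ArrowMonad a b = ArrowMonad (a () b)
>
> and gives it a Monad instance for every ArrowApply a, with bind roughly
> as follows (a sketch of the idea, not the verbatim source):
>
>   ArrowMonad m >>= f =
>     ArrowMonad (m >>> arr (\x -> let ArrowMonad h = f x in (h, ()))
>                   >>> app)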
>
>
> Another thing I want to point out, because it's closely related:
> Alternative/ArrowPlus/MonadPlus
> These provide the assembly lines with another form of
> choice/combination. Of course you should note that not every
> idiom (Applicative) is an Alternative, and the same is true for Arrow and
> Monad. They all work when the effect has some notion of emptiness and
> there is some way to combine stuff (Monoid). For Maybe, empty is Nothing
> and when we have something (Just) we feel no need to combine. For lists,
> [] is empty and combining is just concatenation.
> So why do I mention this at all? Because it can act like a kind of
> choice (if/then/else) or as a way to traverse multiple codepaths, which
> is especially useful for idioms, as we've just seen that these have no
> native way to make choices. Of course you have to remember that this
> isn't really a dynamic choice "inside". But it's not completely steered
> from outside of the factory either. It sits somewhat nicely halfway, and
> is of course limited to "is it empty? / did it succeed?" choices only.
> But in many cases (parsing, for example) it can give idioms just that
> extra boost they otherwise miss, without having to move to arrows/monads.
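>
> Concretely, for the two effects mentioned (standard instances only):
>
>   Nothing <|> Just 5    -- Just 5: first one was empty, try the next
>   Just 3  <|> Just 5    -- Just 3: succeeded, no need to combine
>   [1,2]   <|> [3]       -- [1,2,3]: lists combine by concatenation
>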
> I haven't bothered thinking of a factory-visualization analogy for these
> yet, because I'm afraid nobody would want to work there anymore if they
> had to worry about Monoids sneaking around.
>
>
>
> Lastly: what about these static optimizations I keep mentioning?
> They are explained in chapter 3 of
> http://www.cse.chalmers.se/~rjmh/Papers/arrows.pdf
> Another nice example, which is more about reasoning about code:
> http://blog.downstairspeople.org/2010/06/14/a-brutal-introduction-to-arrows
>
> Both examples build up static information about a computation/assembly
> line without starting it yet. This information can then be used to
> optimize certain parts (not run an expensive computation if we already
> know the outcome), combine parts (do 2 things at once if we know they
> are closely related), or to refuse to run at all if certain
> conditions (performance/security) aren't met.
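>
> A minimal sketch of the chapter-3 idea: pair every step with a static
> cost estimate that can be read off without running anything. The Costed
> type and its names are mine, not Hughes':
>
>   import Prelude hiding (id, (.))
>   import Control.Category
>   import Control.Arrow
>
>   -- each step carries a cost; composition adds costs statically
>   data Costed b c = Costed Int (b -> c)
>
>   cost :: Costed b c -> Int
>   cost (Costed n _) = n   -- inspect the pipeline without running it
>
>   instance Category Costed where
>     id = Costed 0 id
>     Costed n g . Costed m f = Costed (m + n) (g . f)
>
>   instance Arrow Costed where
>     arr f = Costed 0 f
>     first (Costed n f) = Costed n (first f)
>
>   -- cost (arr (+1) >>> Costed 5 veryExpensive) == 5, known up front
>   -- (veryExpensive is a hypothetical placeholder function)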
>
>
> Having worked with dependently typed languages for some time now,
> though, I find that both these examples feel redundant in a way.
> They both just combine a piece of static data with some dynamic data
> (function). Then on startup, they collapse the static stuff, act upon
> what they've learned (optimize, report back) and leave the dynamic part as
> a result, to run. So in a way they are all about an ordinary function,
> with some fancy extra knowledge about it that needs to be stored
> somewhere, and a phase where this knowledge is used to deliver an
> optimized runtime thingy. Sounds an awful lot like what compilers do,
> right? Now, when these cases were worked out, Haskell's type system was
> not sufficient to store the static knowledge and to apply it, but
> currently, with all modern extensions it's getting awfully close, if
> it's not there already.
>
> So perhaps in the near future, we can just supply Monad with advanced
> static knowledge+reasoning to give it all the nice properties that arrow
> and idiom have.
>
> Have a nice weekend,
> Mathijs
>
>