[Haskell-cafe] Monads in Scala, XSLT, Unix shell pipes was Re: Monads in ...
Gregory Woodhouse
gregory.woodhouse at sbcglobal.net
Mon Nov 28 10:01:05 EST 2005
On Nov 27, 2005, at 2:36 PM, Bill Wood wrote:
> (I'm going to do a lazy permute on your stream of consciousness;
> hope it terminates :-).
>
> I think the Rubicon here is the step from one to many -- one
> function/procedure to many, one thread to many, one processor to
> many, ... . Our favorite pure functions are like the Hoare triples
> and Dijkstra weakest preconditions of the formal methods folks in
> that the latter abstract from the body of a procedure to the
> input-output relation it computes; both the function and the
> abstracted procedure are atomic and outside of time.
Right. And the key to working with Hoare triples is that they obey a
natural composition law. I can go from P and Q to P;Q as long as the
pre- and post-conditions "match up". It's less clear that such a
simple logic is even possible for concurrent programs, particularly
in a deterministic setting.
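Spelled out, the rule I have in mind is the ordinary sequencing rule
(writing it down only so we're looking at the same thing):

   {pre} P {mid}     {mid} Q {post}
   ---------------------------------
   {pre} P ; Q {post}

and the whole game is finding an intermediate assertion "mid" that
makes the two premises match up.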
> After all, aren't "referential
> transparency" and "copy rule" all about replacing a function body with
> its results?
Is that really a "copy rule" or is it a requirement that the program
obey some type of compositional semantics? Put a little differently,
I think your terminology here is a bit leading. By "copy rule", I
think you have in mind something like beta reduction -- but with
respect to whom? If there is some kind of agent doing the copying
that we think of as a real "thing", isn't that a process?
> Well, as soon as there are two or more
> functions/procedures in the same environment, the prospect of
> interaction and interference arises, and our nice, clean,
> *comprehensible* atemporal semantics get replaced by temporal logic,
> path expressions, trace specifications, what have you.
Right, because our execution threads become little lambda universes
interacting with their respective environments (i.e., communicating).
> Some notion of process is inevitable, since now each computation
> must be treated as an activity over time in order to relate events
> that occur during the execution of one computation with the events
> of another.
You may be right, but I suppose I'm stubborn and haven't quite given
up yet. Think about temporal and dynamic logic as being, in some
sense, alternative program logics. They are both useful, of course,
but differ in where the "action" is. For temporal logic, the primary
dimension is time. Will this condition always hold? Will it hold at
some time in the future? But in dynamic logic, the "action" is
program composition. Even so, if you write [alpha]P, then you assert
that P holds after every execution (in time?) of alpha, but you do not
otherwise reason about program execution. In terms of Kripke
("possible worlds") semantics, what is your accessibility relation?
>
> Functional programming gives us the possibility of using algebra to
> simplify the task of reasoning about single programs. Of course,
> non-functional procedures can also be reasoned about algebraically,
> since a procedure P(args) that hammers on state can be adequately
> described by a pure function f_P :: Args -> State -> State. The
> problem is, of course, that the state can be large.
Right, but Kripke semantics gives us a language in which to talk
about how state can change. Better, can subsystems be combined in
such a way that state in the larger system can naturally be
understood in terms of state in the subsystems?
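To make Bill's f_P concrete, here is the shape I have in mind -- just
a toy, written against the ordinary State monad from mtl, with names
of my own invention:

   import Control.Monad.State

   type Counter = Int

   -- Bill's f_P, literally: Args -> State -> State
   incr :: Int -> Counter -> Counter
   incr n s = s + n

   -- The same procedure phrased monadically, so "statements" compose
   -- with (>>) much as they would in an imperative language
   incrM :: Int -> State Counter ()
   incrM n = modify (+ n)

   prog :: State Counter ()
   prog = incrM 1 >> incrM 2   -- execState prog 0 == 3

The trouble Bill points to is that Counter is tiny here, whereas a
real State type is, in effect, the whole world.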
>
> But the functional paradigm offers some hope for containing the
> complexity in the world of many as it does in the world of one. I
> think combining formalisms like Hoare's CSP or Milner's CCS with
> computations gives us the possibility of doing algebra on the
> temporal event sequences corresponding to their interactions; the
> hope is that this is simpler than doing proofs in dynamic or
> temporal logic. Using functional programs simplifies the algebraic
> task by reducing the size of the set of events over which the
> algebra operates -- you consider only the explicitly shared
> parameters and results, not the implicitly shared memory that can
> couple non-functional procedures.
But isn't this true because interaction between the "pieces" is more
narrowly constrained? An algebraic analog might be a semidirect
product of groups. Unlike the special case of direct products, there
is some interference here, but it is restricted to inner
automorphisms (conjugation).
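Just to fix notation: in a semidirect product of N by H the
multiplication is

   (n1, h1)(n2, h2) = (n1 * phi_h1(n2), h1 h2)

where, in the internal case, phi_h(n) = h n h^-1 is conjugation -- so
the only cross-talk between the factors is H acting on N.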
>
> It is conceivable that you can get your compositionality here as
> well. Suppose we package computations with input-output parameter
> specifications and CSP-like specifications of the pattern of event
> sequences produced when the computation executes. It may be
> possible to reason about the interactions of the event sequences of
> groups of packages, determine the event sequences over non-hidden
> events provided by the composite system, etc.
>
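Very roughly, I picture such a package as something like the
following (a throwaway Haskell sketch; the names are mine, not any
existing library):

   data Event = Send String | Recv String
     deriving (Eq, Show)

   data Package i o = Package
     { run    :: i -> o        -- the input-output specification
     , traces :: [[Event]]     -- CSP-style set of permitted event
                               -- sequences
     }

Reasoning about a composite would then amount to combining the trace
sets and hiding the internal events.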
> As far as Bulat's comment goes, I'm mostly in agreement. My
> dataflow view was really driven by the intuition that a functional
> program can be described by a network of subfunctions linking
> outputs to inputs; cross your eyes a little and voila! A dataflow
> network.
And I quite liked the data flow concept. That may be what I'm looking
for, too, but I need to let it "sink in" a bit.
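The "cross your eyes" picture is already visible in tiny examples -- a
throwaway sketch:

   import Control.Arrow ((&&&))

   mean :: [Double] -> Double
   mean = uncurry (/) . (sum &&& (fromIntegral . length))

Here sum and length are two nodes fed from the same input edge, and
(/) is a node consuming both of their outputs: a three-node dataflow
graph written as plain composition.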
> And if we're smart
> enough to make a compiler do that, why bother the programmer?
Good question. In fact compiler design has really influenced my
thinking here. We can eliminate tail recursion automatically, so why
bother the programmer? Redundant reads from a provably unchanged
variable can be eliminated, so why bother the programmer? We can even
optimize (some) loops for parallel execution on a multiprocessor --
something which is perhaps a bit more on point.
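The tail-call case is the familiar accumulator rewrite -- ignoring
strictness details, something like

   sumTo :: Int -> Int
   sumTo n = go 0 n
     where
       go acc 0 = acc
       go acc k = go (acc + k) (k - 1)   -- tail call; compiles to a loop

and no annotation from the programmer is needed for the compiler to
spot it.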
> But you're not talking about analyzing a function into a
> parallel/concurrent/distributed implementation; rather, you're
> interested in synthesizing a temporal process out of interacting
> computations.
Not exactly. I'm thinking about them as dual aspects of the same
problem: analysis and synthesis. You may recall that I suggested that
"programs" for a distributed system might be compiled as a whole,
much as a modern compiler might generate code capable of using the
possibilities of parallelism in the target architecture. But it seems
to me that a satisfactory theory ought to provide some insight into
how the pieces fit together, too. Just knowing how to generate them
isn't enough.
>
> The temporal aspect won't go away. And that's the problem.
I agree with you -- on both counts.
>
> -- Bill Wood
>
>
===
Gregory Woodhouse
gregory.woodhouse at sbcglobal.net
"And the end of all our exploring
will be to arrive where we started
And know the place for the first time"
-- T.S. Eliot