[Haskell-cafe] Re: Problems with strictness analysis?
Claus Reinke
claus.reinke at talk21.com
Wed Nov 5 19:25:49 EST 2008
>> equational /= denotational
>
> Nonetheless, Haskell has equational semantics which are derived from
> its denotational ones. But when I said "equational semantics" I
> really meant something more like "equations" :-).
Unlike Standard ML, for instance, Haskell does not have a standard
semantics - a persistent embarrassment. There are semantics for fragments,
but no guarantee that any of them will match any part of the language, let
alone its implementations.
One can base equational semantics on denotational semantics, but that is
not the only way, hence the two are not equal. I was trying to point out
the missing part, where equations are useful for operational reasoning,
beyond simple denotations.
>> Sometimes, operational semantics based
>> on directed equations give you the best of both worlds: equational reasoning
>> and behavioural guidelines, both at a suitably "clean" and abstract level.
>
> By directed equations you mean unidirectional rewrite rules?
Yes. Think of rewriting (replacing old with new) as a general form
of operational semantics. Within this form, there is a wide range of
possibilities, including rewriting abstract machine states and rewriting
source-level programs. Somewhere near the latter, there is a "most
abstract" form of operational semantics for a language, one for which
every other form adds details that are not inherent in the language, but
are artifacts of the formalisation (details of the abstract machine, or
details of the mathematical domain, or ..).
> But I'd like to back up a bit. I may have been too quick to assign
> the judgement "dirty" to operational thinking. I am not asking this
> question as a complaint, as a challenge on the language, or as any
> other such rhetorical device: my question is earnest. I would like to
> know or to develop a way to allow abstract analysis of time and space
> complexity.
So you want to reason about the way from the original program to its
outputs, not just about the outputs. You can cling to denotations, by
extending mere values with information about the computation leading
to those values, but why not reason about the computation directly?
In logic terms, you want to talk about the proof, not just about the
truth of your theorems. In functional language terms, you want to talk
about reductions, not just about normal forms. This works well for
pure lambda calculus, and has been extended to cover other aspects
of Haskell implementations, such as the call-by-need lambda calculus,
which models the evaluation of non-strict programs with sharing.
The call-by-need lambda calculus
John Maraist, Martin Odersky, and Philip Wadler
Journal of Functional Programming, 1998
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.5259
The idea is to use "let" to represent sharing, and to refine the reduction
rules to take this into account: instead of substituting parameters into
function bodies, parameter bindings are kept in "let"-bindings, where
their evaluation is shared (only values can be substituted, so substitution
is preceded by evaluation, when needed).
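For instance (a small sketch in my own notation, not taken verbatim from
the paper; the binding "example" is just for illustration):

    -- an unrestricted beta step would duplicate the work in (1 + 2):
    --   (\x -> x + x) (1 + 2)   ~>   (1 + 2) + (1 + 2)
    -- the call-by-need rules keep the argument in a binding instead:
    --   (\x -> x + x) (1 + 2)   ~>   let x = 1 + 2 in x + x
    -- x is evaluated once, to 3, and that value is shared by both
    -- occurrences, so the whole term reduces to 6
    example :: Int
    example = (\x -> x + x) (1 + 2)

    main :: IO ()
    main = print example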
The resulting reductions are quite a bit more involved than without
sharing, because all reductions take place within a context (those
"let"-bindings). But that is quite close to what happens in a compiled
graph reduction implementation: those "let"-bindings represent the
heap, and the nested contexts correspond to the stack.
(there are of course optimizations and representation changes that affect
performance, but the former are themselves expressed as rewrite rules,
and the latter can be accounted for by refining the rewrite system, when
such details are needed - luckily, that isn't often the case)
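As an aside, GHC even exposes directed source-level rewrite rules to the
programmer through its RULES pragma; the well-known map/map fusion rule
(quoted here as an example, in a little module of my own making) is exactly
such a directed equation, applied left-to-right during compilation:

    module MapFusion where

    -- GHC applies this directed equation left to right while simplifying
    -- (with optimisation on), collapsing two list traversals into one
    {-# RULES "map/map" forall f g xs.  map f (map g xs) = map (f . g) xs #-}

That is about optimization rather than about giving the language a
semantics, but it uses the same machinery: directed equations over
source programs.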
> On my time working with Haskell, I've developed a pretty accurate
> "intuition" about the performance of programs. It is some meld of
> thinking in terms of the lazy evaluation mechanism, some higher-level
> rules of thumb about when new thunks are allocated, and probably some
> other stuff hiding in the depths of my brain. I know it, but I am not
> satisfied with it, because I can't formalize it. I wouldn't be able
> to write them down and explain to a fellow mathematician how I reason
> about Haskell programs.
>
> The example being discussed in this thread is a good one:
>
> sum [1..10^8] + length [1..10^8]
>
> With Haskell's semantics, we know we can write this as:
>
> let xs = [1..10^8] in sum xs + length xs
>
> But it causes a change in memory *complexity*, so in some sense these
> two sentences are not equal. What is the theory in which we can
> observe this fact? Is it possible to give a simple denotational
> semantics which captures it?
Why do you insist on denotational semantics for reasoning about evaluation?
Denotational semantics are best when all you care about are values.
Have a look at that paper, or the earlier version
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.2287
There is also
John Launchbury, A Natural Semantics for Lazy Evaluation, 1993
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.2016
which used an explicit heap to capture sharing (the later use of "let"
made it possible to talk about sharing without adding any constructs
not in the language). Either of these, if extended with the necessary
primitives, should distinguish your two terms: in the former, sharing
of the long list never arises (so the computations are independent);
in the latter, that sharing is never lost (so evaluating one sub-term
evaluates the shared xs, which is kept around until the other
sub-term gets around to consuming it).
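To make that concrete, here are the two terms as a small runnable program
(my transcription of your example; the names and the Int annotations are
mine). Whether sum itself runs in constant space is a separate question
- the topic of this thread - but the sharing difference shows up regardless:

    independent :: Int
    independent = sum [1 .. 10^8] + length [1 .. 10^8]
    -- no sharing: each consumer traverses its own list, which can be
    -- garbage-collected as it is consumed

    shared :: Int
    shared = let xs = [1 .. 10^8] in sum xs + length xs
    -- xs is shared: the whole list stays live on the heap until the
    -- second consumer (length) has finished with it

    main :: IO ()
    main = do
      print independent
      print shared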
Claus