[Haskell-cafe] Robert Harper on monads and laziness

Dan Doel dan.doel at gmail.com
Tue May 3 23:52:24 CEST 2011


On Tue, May 3, 2011 at 2:26 AM, Dominique Devriese
<dominique.devriese at cs.kuleuven.be> wrote:
> What I find interesting is that he considers (non-)termination an
> effect, which Haskell does not manage to control like it does other
> types of effects. Dependently typed purely functional languages like
> Coq (or Agda if you prefer ;)) do manage to control this (at the cost
> of replacing general recursion with structural recursion) and require
> you to model non-termination in a monad (or Applicative functor) like
> in YNot or Agda's partiality monad (written _⊥) which models just
> non-termination.

Dependent typing isn't really necessary, only totality. Of course,
there's some agreement that dependent types help you get back some of
the power you'd lose by going total (by letting you write types
precise enough for your programs to be expressed with the more
limited forms of recursion).
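
Roughly, for illustration (the names here are just made up for this
sketch): a totality checker accepts a definition that recurses on a
structurally smaller argument, but rejects one whose argument merely
gets "smaller" in some unproven sense.

    -- Structural recursion: the recursive call is on the tail, a
    -- syntactic subterm of the input, so termination is evident.
    sumList :: [Int] -> Int
    sumList []     = 0
    sumList (x:xs) = x + sumList xs

    -- General recursion: the argument of the recursive call is not a
    -- subterm of the input, so a structural termination checker would
    -- reject this, even though it may in fact terminate.
    collatzSteps :: Integer -> Integer
    collatzSteps 1 = 0
    collatzSteps n
      | even n    = 1 + collatzSteps (n `div` 2)
      | otherwise = 1 + collatzSteps (3 * n + 1)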

> I have the impression that this separation of the partiality effect
> provides a certain independence of evaluation order which neither ML
> (because of side-effects) nor Haskell (because of non-strict
> semantics) manage to provide. Such an independence seems very useful
> for optimization and parallel purposes.

Total lambda calculi tend to yield the same results irrespective of
evaluation strategy. I guess that's useful for optimization, because
you can apply transformations wherever you want without worrying about
changing the definedness of something (because everything is
guaranteed to be well defined regardless of your evaluation strategy).
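
As a concrete (made-up) illustration of why evaluation order matters
once non-termination is admitted:

    -- In Haskell, which admits non-termination, evaluation order
    -- affects definedness.
    loop :: Int
    loop = loop              -- a bottom value

    lazyConst :: Int -> Int -> Int
    lazyConst x _ = x

    -- Under call-by-need, 'lazyConst 1 loop' is 1; under call-by-value
    -- it diverges, because 'loop' is evaluated first. In a total
    -- calculus there is no 'loop' to construct, so any terminating
    -- strategy computes the same answer.
    example :: Int
    example = lazyConst 1 loop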

I don't see how it obviates strictness analysis, though. For instance:

    sumAcc a (x:xs) = sumAcc (a + x) xs
    sumAcc a [] = a

    ... case sumAcc 0 l of { n -> ... }

Even in a total language, accumulating lazy thunks is likely to be
inefficient when we finally go to use the accumulator, whereas one can
also construct examples (even in a total, inductive fragment) where
lazy evaluation is superior. So you still need to do analysis to
determine which things should be evaluated strictly and which lazily
for efficiency, even if you aren't obligated to do it to determine
whether your optimizations are semantics-preserving.
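
For what it's worth, here's a sketch (names mine, not the compiler's
output) of what strictness analysis would effectively do to the
example above: force the accumulator at each step so it stays a plain
Int rather than a growing chain of (+) thunks.

    {-# LANGUAGE BangPatterns #-}

    -- A strict variant of sumAcc: the bang forces the accumulator on
    -- each call, so no chain of (+) thunks builds up. This is what a
    -- strictness analyser would arrange automatically once it sees the
    -- result is demanded.
    sumAcc' :: Int -> [Int] -> Int
    sumAcc' !a (x:xs) = sumAcc' (a + x) xs
    sumAcc' !a []     = a

The same effect can be had by writing the sum as Data.List.foldl' (+) 0,
which seqs the accumulator at each step.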

-- Dan


