[Haskell-cafe] Re: evaluate vs seq

apfelmus at quantentunnel.de apfelmus at quantentunnel.de
Thu Sep 14 16:20:57 EDT 2006

Michael Shulman wrote:
> On 9/13/06, apfelmus at quantentunnel.de <apfelmus at quantentunnel.de> wrote:
>> So `seq` forces its first argument. When we define
>>   f x = x `seq` (Return x)
>> we thereby get
>>   f _|_    == _|_
>>   f []     == Return []
>>   f (x:xs) == Return (x:xs)
>> To compare, the semantics of (evaluate) is
>>   evaluate _|_    == ThrowException =/= _|_
>>   evaluate []     == Return []
>>   evaluate (x:xs) == Return (x:xs)
>> That should answer your question.
> I must not be phrasing my question very well; I feel like we're
> talking past each other.  It seems to me that when writing actual
> programs (rather than reasoning about denotational semantics) the
> reason one would use `seq' or `evaluate' is to force something to be
> evaluated "now" rather than "later", i.e. to get around Haskell's
> default lazy execution.

> Your semantics say that (x `seq` return x) and (evaluate x) have the
> same result when x is anything other than _|_.  All well and good, but
> (return x) *also* has those same results when x is not _|_.  Why would
> one use the former two rather than (return x), if x is known not to be
> _|_?  Because they evaluate x at different "times", right?  Even though
> the eventual return value is the same, and thus the *semantics* are the
> same.  So laying aside the formal semantics, what is the difference in
> terms of actual, real, Haskell programs?

You are right, I completely forgot that executing a real Haskell program
amounts to graph reduction and that `seq`'s purpose is to control this
reduction and the associated memory usage. But denotational semantics
can still serve as an approximation to the actual graph reduction.

To achieve that, we think of _|_ as an *unevaluated expression*, i.e.
something that is not a constructor like (:) or [] in weak head normal
form. This is different from interpreting _|_ as "divergent program" and
coincides with the denotational view that _|_ is a value we don't know
anything about. So to speak, when executing a program which evaluates
the list [1..3], the evaluation steps proceed as follows:
    _|_ , 1:_|_, 1:(2:_|_), 1:(2:(3:_|_)) and finally 1:(2:(3:[]))
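As a small sketch of my own (not GHC's actual reduction machinery, just an
observable consequence of it): `seq` performs exactly one such step, stopping
as soon as the outermost constructor appears, so the head and tail may still
be _|_ afterwards:

```haskell
main :: IO ()
main = do
  -- `seq` reduces its first argument only to weak head normal form,
  -- i.e. to the outermost constructor.  The list below is in WHNF the
  -- moment the (:) appears; both head and tail remain unevaluated (_|_),
  -- so this prints without ever touching the undefineds.
  putStrLn ((undefined : undefined) `seq` "reached WHNF: ok")
```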

In other words, one might read a notation like (1:_|_) as shorthand for
the set of all possible lists that start with 1. The task of a program
is to single out one list by progressively narrowing down such sets.
This is the intuition behind why every function must be monotone: for
example, _|_ <= 1:_|_ implies f (_|_) <= f (1:_|_), where <= is ordered
so that larger sets lie below smaller ones.
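To make the monotonicity concrete, here is a small sketch of my own using
(take 2): feeding it the more defined argument 1:_|_ instead of _|_ yields a
more defined result, 1:_|_ instead of _|_. The first element of the result is
already usable even though the input's tail is undefined:

```haskell
import Control.Exception

main :: IO ()
main = do
  -- take 2 maps  _|_  to  _|_,  and  1:_|_  to  1:_|_  (monotone):
  -- the first element of (take 2 (1:_|_)) is defined ...
  r1 <- try (evaluate (take 2 (1 : undefined :: [Int]) !! 0))
          :: IO (Either ErrorCall Int)
  putStrLn (either (const "first element: _|_") show r1)
  -- ... but the second element is still _|_, since the input tail was _|_.
  r2 <- try (evaluate (take 2 (1 : undefined :: [Int]) !! 1))
          :: IO (Either ErrorCall Int)
  putStrLn (either (const "second element: _|_") show r2)
```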

With this in mind the equations
1)  return _|_             == Return _|_
2)  _|_ `seq` (return _|_) == _|_
can be interpreted as follows:
1) when reducing a return-redex (i.e. evaluating it), we reach weak head
normal form without evaluating the argument (which was _|_)
2) when reducing the `seq`-redex without further evaluating its
arguments (_|_ in our case), we get nowhere (i.e. only _|_)
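These two equations can be watched in action. A minimal sketch of my own,
using Control.Exception's try to survive the _|_ (which GHC surfaces as an
ErrorCall exception):

```haskell
import Control.Exception

main :: IO ()
main = do
  -- (1) return reaches WHNF without touching its argument, so this
  -- executes without incident:
  _ <- return (undefined :: Int)
  putStrLn "return undefined: succeeds"
  -- (2) the seq-redex must evaluate its first argument, so running this
  -- action hits the undefined; try catches the resulting ErrorCall.
  r <- try (undefined `seq` return (0 :: Int))
         :: IO (Either ErrorCall Int)
  putStrLn (either (const "seq undefined: _|_ (exception)") show r)
```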

Thus, when evaluating the expressions (one step!, i.e. just to weak head
normal form), number 1) returns immediately, but in 2), the `seq` needs
to evaluate its first argument. So a function's result when applied to
_|_ determines how much of its argument it will evaluate! I think this
is the point our misunderstanding is about?

For (evaluate :: a -> IO a), I said something like
3)  evaluate _|_    == ThrowException
and meant that, when evaluating 3) one step to weak head normal form,
the argument does not get evaluated. 2) is quite different from that.
Equation 3) is of course wrong and should read more like
    evaluate x == Evaluate x
where (Evaluate x) is a constructor, but one with a particular behavior
when executed in the IO monad.
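This distinction is observable: building the (Evaluate x)-like action is
harmless, and only executing it in the IO monad forces x, at which point the
failure appears as an ordinary, catchable exception. A sketch of my own:

```haskell
import Control.Exception

main :: IO ()
main = do
  -- Constructing the action does not evaluate the argument:
  let action = evaluate (undefined :: Int)
  putStrLn "action constructed, argument untouched"
  -- Executing it does; the resulting ErrorCall is an ordinary
  -- exception that can be caught in the IO monad:
  r <- try action :: IO (Either ErrorCall Int)
  putStrLn (either (const "executed: exception caught") show r)
```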

Note that `seq` and evaluate force evaluation only to weak head normal
form, so
   evaluate (1:undefined) >>= print . head
will not raise an exception.
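A runnable version of this last point (my own sketch), together with the
contrast when more than weak head normal form is demanded:

```haskell
import Control.Exception

main :: IO ()
main = do
  -- Only the outermost (:) is forced, so taking the head is safe:
  evaluate (1 : undefined :: [Int]) >>= print . head
  -- Demanding the whole spine (via length) does hit the undefined tail:
  r <- try (evaluate (length (1 : undefined :: [Int])))
         :: IO (Either ErrorCall Int)
  putStrLn (either (const "length: exception") show r)
```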

