[Haskell-cafe] Preventing sharing

Tom Ellis tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk
Thu Dec 24 10:50:03 UTC 2015

On Tue, Dec 22, 2015 at 09:24:37PM +0900, Oleg wrote:
> Once I mention the book I must point out Chap 23 of the book (near the
> end). It should be the required read. The introduction to the section
> contains the following emphasized statement (emphasized by the author,
> Simon Peyton Jones):
>    A major weakness of functional languages is the difficulty
>    of reasoning about their space and time behavior.

Let me start by saying that I, too, am very skeptical of the value of
lazy-evaluation-by-default.  However, regardless of my support for your
conclusion, I don't agree with your line of reasoning in this case.

> Let us take a step back. The article on my web page noted the great
> difficulty of writing an AI search program in Haskell because the search
> tree is evaluated lazily: whenever a node is evaluated, its result is
> memoized for further use. That is precisely the wrong thing to do for
> such problems. Again, this problem is precisely one of lazy evaluation (as
> defined in the book below). The obvious way to avoid memoization was
> to introduce thunks -- which didn't work. The article then developed a
> more involved solution. Yes, -fno-full-laziness would have worked just
> as well. However, the solution in the article works on a
> case-by-case basis whereas -fno-full-laziness affects the whole
> compilation unit. It is for that reason that I pointed out the article
> in this discussion.
> Let us turn to the problem of the "optimization" that we are
> discussing. Does it have anything to do with laziness? Yes, it
> does. Please see Chap 15 of the excellent book
> http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/
> which explains the full laziness transformation.
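For concreteness, here is a minimal sketch (my code, not the article's, with illustrative names) of the thunk trick you describe: wrapping the computation in a function of unit, so that each application is supposed to recompute it.  Full laziness defeats the trick because GHC may float the body out of the lambda and share it anyway:

```haskell
-- Sketch of the thunk trick (illustrative names, not from the article).
-- The intent is that every force of the "thunk" recomputes `expensive n`;
-- GHC's full laziness may float (expensive n) out of the lambda,
-- restoring exactly the sharing the thunk was meant to prevent.
expensive :: Int -> Integer
expensive n = sum [1 .. fromIntegral n]

-- A "thunk": a function from unit, forced by applying it to ().
delayed :: Int -> (() -> Integer)
delayed n = \() -> expensive n

main :: IO ()
main = do
  let t = delayed 10
  print (t () + t ())  -- prints 110
```

Whether `expensive 10` runs once or twice here depends on the optimisation flags, which is exactly the predictability problem at issue.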

In order to demonstrate that the problem you are describing is unique to
lazy-by-default languages, you need to explain why it does not occur in
strict-by-default languages.

The motivating example in the excellent book is that in the program

    f = g 4
    g x y = y + (sqrt x)

    (f 1) + (f 2)

the (sqrt 4) expression is evaluated twice.  Thus the "full laziness
transformation" (the name is misleading!) rewrites the program to

    f = g 4
    g x = let sqrtx = sqrt x in \y -> y + sqrtx

    (f 1) + (f 2)
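To see the duplicated work directly, one can instrument the book's example with Debug.Trace (my instrumentation, not the book's).  Compiled without optimisation the trace message typically appears once per call of f; with full laziness it typically appears only once:

```haskell
import Debug.Trace (trace)

-- The book's example, with a trace on the subexpression that the
-- full laziness transformation would float out and share.
g :: Double -> Double -> Double
g x y = y + trace "sqrt evaluated" (sqrt x)

f :: Double -> Double
f = g 4

main :: IO ()
main = print (f 1 + f 2)  -- prints 7.0
```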

To prove your point you now need to explain why

1. the full laziness transformation (again: misleading name!) is
   required in lazy-by-default languages

2. the full laziness transformation (don't be misled by the name) is not
   required in strict-by-default languages

Personally I can't see why either 1 or 2 is true.  Can you help me out?

Or could you answer an essentially equivalent question?

* If Haskell had never had the full laziness transformation (very misleading
  name!), you would not have seen the space leak in your iterative deepening
  implementation.  We would have gained some predictability in space usage. 
  But what would have been lost?

My answer to the question is "we would have lost pretty much nothing". 
Anyone who wants the full laziness transformation (poor name) can implement
it herself.  What's your answer?
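For what it's worth, implementing it herself amounts to writing the let-floated form of the book's example by hand (a sketch, reusing that example):

```haskell
-- Manual full laziness: compute sqrt x once per partial application
-- and share it across every call of the resulting function.
gShared :: Double -> Double -> Double
gShared x = let sqrtx = sqrt x in \y -> y + sqrtx

fShared :: Double -> Double
fShared = gShared 4

main :: IO ()
main = print (fShared 1 + fShared 2)  -- prints 7.0
```

The sharing is now visible in the source, rather than conjured up (or not) by the optimiser.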

