[Haskell-cafe] MonadFix wiki confusion

Albert Y. C. Lai trebla at vex.net
Thu Oct 22 20:14:22 UTC 2015


On 2015-10-18 10:40 AM, Samuel Rødal wrote:
> the MonadFix wiki at https://wiki.haskell.org/MonadFix has a statement
> that I feel is a bit misleading.
>
> In section "2.2 Lazy algorithm interleaved with effects", it claims
> that making the BTree data structure strict doesn't cause endless
> recursion.
>
> Well, that's true, but that's just because rep_x_sum returns a tuple
> containing the BTree and the summed values of the current subtree, and
> the tuple is lazily constructed - postponing the construction of the
> tree value. So highlighting the fact that the function still works
> when the BTree structure is made strict is kind of a red herring.

This rests on a misconception.

When you write this kind of code:

     return (x+y, a+b)

the tuple is not constructed lazily. The tuple is constructed here and now.

To be sure, the sub-value x+y is constructed lazily, but then you would 
say "x+y is constructed lazily", not "the tuple is constructed lazily". 
Similarly for a+b. And to say both at once, you would say "the tuple 
content is constructed lazily", not "the tuple is constructed lazily".

The tuple itself --- a machine word standing for the data constructor 
(,), followed by a pointer to the thunk for x+y, followed by a pointer 
to the thunk for a+b --- is constructed here and now.
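
Here is a quick GHCi check of my own (not from the wiki) that makes the
distinction observable: forcing the pair only demands the outer (,)
constructor, so it succeeds even though both components are bottom.

     ghci> let p = (undefined :: Int, undefined :: Int)
     ghci> p `seq` "forcing the pair itself is fine"
     "forcing the pair itself is fine"
     ghci> fst p `seq` "forcing its content is not"
     *** Exception: Prelude.undefined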

Furthermore, if the code goes like this:

     return (B n x y, a+b)

then not only the tuple but also the B n x y ("the tree value") is 
constructed here and now.

Again, to be sure, this code is lazy in n, in x, and in y. But the tree 
value --- a machine word standing for the data constructor B, followed 
by a pointer to n, followed by a pointer to x, followed by a pointer to 
y --- is constructed here and now.
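
And the same check, again mine, for the tree value: the B cell reaches
WHNF even when all three fields are bottom; only looking inside one of
them blows up.

     ghci> data BTree = Z | B Int BTree BTree
     ghci> let t = B undefined undefined undefined
     ghci> t `seq` "forcing the B cell is fine"
     "forcing the B cell is fine"
     ghci> (case t of B n _ _ -> n) `seq` "forcing a field is not"
     *** Exception: Prelude.undefined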

* * *

Guess what: you cannot fall back to "the content of the tuple, and/or 
the content of the tree value, is constructed lazily" either, because 
that claim is readily refuted by an experiment.

And it should be natural to think up this experiment, because every 
time you conjecture "this works because the code is lazy in xxx", the 
first instinct should be "so let me run an experiment that forces 
evaluation of xxx as early as possible, to test the conjecture".

So here it goes:

{-# LANGUAGE RecursiveDo #-}

import Control.Exception (evaluate)

data BTree = Z | B Int BTree BTree deriving Show

-- Tie the knot: the sum s computed by rep_x_sum is fed back in as its
-- own argument.
repsum t = do
    rec (u,s) <- rep_x_sum t s
    putStrLn ""
    return u

-- Rebuild the tree with every label replaced by s, printing the
-- traversal along the way, and force everything except s as soon as
-- it is available.
rep_x_sum Z _ = return (Z, 0)
rep_x_sum (B i l r) s = do
    putStr "("
    (l',sl) <- rep_x_sum l s
    putStr (show i)
    (r',sr) <- rep_x_sum r s
    putStr ")"
    evaluate l'
    evaluate r'
    tree  <- evaluate (B s l' r')
    sum   <- evaluate (i + sl + sr)
    tuple <- evaluate (tree, sum)
    return tuple

main = repsum (B 4 (B 3 Z Z) (B 5 Z (B 1 Z Z)))
        >>= print
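
For reference, the output should look roughly like this (the traversal
on one line, then the rebuilt tree):

     ((3)4(5(1)))
     B 13 (B 13 Z Z) (B 13 Z (B 13 Z Z))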


This experiment hastens the evaluation of the tuple content and most of 
the tree content; for good measure, it also evaluates the tuple and the 
tree themselves, whether or not that is redundant.

Run this experiment. See that it does not hang. Use this observation to 
refute many, many hypotheses.

The only content left unevaluated is s. Laziness in s is the only 
laziness needed. But then I already wrote that in the article, didn't I?

"The laziness needed is just in the rep_x_sum algorithm: it does not 
evaluate s."
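
For completeness, here is a stripped-down sketch of my own (not part of
the wiki code) of what goes wrong if you do force the recursively
defined value inside the rec block. The value may be stored in the
result, but it must not be evaluated there:

{-# LANGUAGE RecursiveDo #-}

import Control.Exception (evaluate)

main :: IO ()
main = do
    rec x <- return (1 : x)   -- fine: x is only referred to, never forced
    print (take 5 x)          -- prints [1,1,1,1,1]

Change the rec line to

    rec x <- evaluate x >> return (1 : x)

and the knot can no longer be tied: GHC's fixIO detects the premature
demand and aborts with an exception along the lines of "cyclic
evaluation in fixIO", rather than looping forever. Adding an analogous
evaluate s at the top of rep_x_sum's B clause should make the
experiment above fail in the same way.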