[Haskell-cafe] How can we detect and fix memory leak due to lazyness?

Ahn, Ki Yung kyagrd at gmail.com
Mon Aug 7 22:34:51 EDT 2006


On 8/7/06, Spencer Janssen <spencerjanssen at gmail.com> wrote:
>
> Forcing evaluation using (==) is a bit of a hack.  Luckily, we have a
> better function to force evaluation: seq (which has type a -> b -> b).
>  "seq x y" evaluates "x" to weak head normal form before returning
> "y".
>
> Let's try another feature of Haskell to force evaluation: strict data
> fields.  A ! in front of a field in a data declaration signifies
> strictness.  In the example below, whenever we construct a value with
> TT, the second argument is evaluated.
>
> \begin{code}
> data TT a b = TT a !b
> \end{code}
>
> Perhaps your instances will work correctly with this data declaration?

Surely I've tried that.

Unfortunately, seq and strict data fields are not helpful in general.
They only help for base values such as Int or Bool.
All they do is make sure the value is not a thunk: if it is a list,
they evaluate only as far as the outermost cons cell and no further.
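
For instance, here is a small illustration (my own toy example, not
from the thread) of seq reaching only the outermost constructor:

\begin{code}
-- seq forces its first argument only to weak head normal form
-- (the outermost constructor), so the list elements stay unevaluated.
main :: IO ()
main = do
  let xs = [undefined, 2, 3] :: [Int]
  -- Forcing xs exposes only the outermost cons cell; the undefined
  -- element is never touched, so this prints without error:
  xs `seq` putStrLn "evaluated to WHNF only"
\end{code}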

Someone wrote a deepSeq module for forcing deep evaluation, which
amounts to the same self-equality strictness hack as x == x.
However, to apply such strictness tricks we first have to locate
the source of the memory leak.
I've tried plugging the x == x hack in almost everywhere I could,
but it is still hard to find the right place.
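
For reference, a minimal sketch of what such a deep-evaluation class
might look like (these names and instances are my own guess, not
necessarily what that module actually provides):

\begin{code}
-- A hypothetical deep-evaluation class; the real deepSeq module
-- may differ in names and instances.
class DeepSeq a where
  deepSeq :: a -> b -> b

instance DeepSeq Int where
  deepSeq = seq

instance DeepSeq a => DeepSeq [a] where
  deepSeq []       b = b
  deepSeq (x : xs) b = x `deepSeq` (xs `deepSeq` b)

instance (DeepSeq a, DeepSeq b) => DeepSeq (a, b) where
  deepSeq (x, y) b = x `deepSeq` (y `deepSeq` b)
\end{code}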


I think this is one of the most frustrating drawbacks of developing
software in lazy languages like Haskell.
I am a fan of lazy languages; I like laziness, infinite data
structures, and clean semantics.
But this is really painful: we like to claim that Haskell programs
are robust, yet it seems far too easy to blow up the memory or
overflow the stack without intending to.
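
As a concrete illustration of how easy it is (my own toy example, not
from this thread): the classic foldl space leak, where the lazy
accumulator piles up a huge chain of thunks, next to the strict
foldl' that avoids it.

\begin{code}
import Data.List (foldl')

-- foldl builds a chain of (+) thunks the length of the list before
-- forcing any of them, which can exhaust memory or the stack.
leaky :: Int
leaky = foldl (+) 0 [1 .. 10000000]

-- foldl' forces the accumulator at each step, so it runs in
-- constant space.
fine :: Int
fine = foldl' (+) 0 [1 .. 10000000]

main :: IO ()
main = print fine
\end{code}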

-- 
Ahn, Ki Yung

