[Haskell-cafe] space leaks and optimizations

Alexei Kitaev kitaev at iqi.caltech.edu
Sat Jan 9 05:23:12 EST 2010


Dear David,

Thank you for your comments. I understand your suggestion that a program
should contain some "synchronization points" where the values are
evaluated. But this is not easy. Maybe I should just practice more.

A while ago I played with the strict and lazy ST monads and was able to
achieve the desired behavior, though with difficulty. Unfortunately,
there is no universal solution, and one cannot always use standard
idioms. For example, the lazy version of ST (and State) has the problem
you described, while the strict version exhibits a space leak when used
with mapM or sequence. Simply put, one cannot construct a lazy producer
out of a strict state. I wrote a special version of sequence, which uses
ST.Lazy but forces the evaluation of the implicit state at each step:

import Control.Monad
import Control.Monad.ST
import qualified Control.Monad.ST.Lazy as Lazy

-- like sequence, but forces the implicit state after each element
seqLST :: [Lazy.ST s a] -> Lazy.ST s [a]
seqLST xs = syncLST (f xs)
    where
      f [] = return []
      -- recursing through seqLST, not f, so every step is synchronized
      f (x:xs) = liftM2 (:) x (seqLST xs)

-- round-trip through strict ST: state operations deferred so far are
-- performed before the computation continues lazily
syncLST :: Lazy.ST s a -> Lazy.ST s a
syncLST = join . Lazy.strictToLazyST . Lazy.lazyToStrictST . return
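
For example, seqLST can serve as a lazy producer over mutable state; here
is a small, untested sketch (the names nats and go are made up for this
example) where only the prefix of the state thread that is actually
demanded gets run:

import Data.STRef.Lazy (newSTRef, readSTRef, writeSTRef)

-- an infinite list of naturals generated from a mutable counter;
-- take 10 nats should yield [0..9] without running the whole thread
nats :: [Integer]
nats = Lazy.runST go
  where
    go :: Lazy.ST s [Integer]
    go = do
        ref <- newSTRef 0
        seqLST $ repeat $ do
            n <- readSTRef ref
            writeSTRef ref (n + 1)
            return n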

For the lazy State monad, the analogous synchronization is

syncLState :: State s a -> State s a
syncLState m = State $ \s -> s `seq` runState m s
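
As a usage sketch (untested; label and step are made-up names, and this
assumes the State constructor as above), wrapping each step of a mapM
traversal in syncLState keeps the counter forced, so no chain of (+1)
thunks accumulates in the lazy State monad:

import Control.Monad.State.Lazy

label :: [a] -> [(Int, a)]
label xs = evalState (mapM step xs) 0
  where
    step x = syncLState $ do
        n <- get
        put (n + 1)
        return (n, x)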

I am not completely satisfied with this solution. For one thing, it's
slow and can hardly be optimized. It's also not clear whether one should
evaluate the input state as I did (there was some reason but I don't
remember) or the output state. I think that a better idea would be to
use strict (ST s) together with a lazy continuation, (STC s a). It's
basically a function s -> a, but one has to wrap it with some
GHC-specific types like State#. Mathematically, (STC s a) is an algebra
over the (ST s) monad. I'll try that when I have time.
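
Roughly, the types might look like this (an untested sketch, with bindSTC
as a made-up name; the details may well be wrong):

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.ST (ST(..))
import GHC.Exts (State#)

-- a lazy "continuation" over the strict state token: morally s -> a
newtype STC s a = STC (State# s -> a)

-- the structure map of the algebra over (ST s): run the strict action,
-- then let the lazy continuation observe the resulting state token
bindSTC :: ST s (STC s a) -> STC s a
bindSTC (ST m) = STC $ \s -> case m s of
    (# s', STC k #) -> k s'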

More generally, I am looking for reliable and easy to use tools and
idioms for space-conscious programming. I appreciate the design of the
strict ST and State monads. In particular, ST has a nice invariant that,
at any point, the implicit state is fully evaluated. However, I spent a
lot of time figuring out how the strict and lazy monads actually work. Is
there any documentation? Reading the discussion related to your blog, I
realized that strict State is different in that it does not actually
force the state. But forcing can be achieved by wrapping all actions
with the following function:

-- wrap a state transformer so that the new state is forced
sState :: (s -> (a,s)) -> State s a
sState f = State $ \s -> case f s of
                             (a,s') -> s' `seq` (a,s')
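
For instance (again an untested sketch; forceModify and count are made-up
names), a strict modify built from sState keeps a long chain of state
updates in constant space:

import Control.Monad.State.Strict

-- modify the state and force the result before it is stored
forceModify :: (s -> s) -> State s ()
forceModify f = sState $ \s -> ((), f s)

count :: [a] -> Int
count xs = execState (mapM_ (\_ -> forceModify (+ 1)) xs) 0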

I hope that somebody will answer my other questions about the
operational semantics and optimizations.

-Alexei

David Leimbach wrote:
>>
>> 2) While each step is predictable, the overall behavior of a lazy
>> program can be rather surprising. So one must be very careful. GHC
>> provides two ways to control the evaluation order, seq and bang
>> patterns, but I am not sure which of these (if any) is the right tool.
>> Consider the following example (from the Real World Haskell book):
>> mean :: [Double] -> Double
>> mean xs = sum / fromIntegral num where
>>    (num,sum) = foldl' f (0,0) xs :: (Int, Double)
>>    f (n,s) x = (n+1, s+x)
>> Although it uses foldl', there is a space leak because Haskell tuples
>> are not strict. There are two possible fixes:
>>    f (!n,!s) x = (n+1, s+x)
>> or
>>    f (n,s) x = let n1=n+1; s1=s+x in n1 `seq` s1 `seq` (n1,s1)
>> The first one is short, but less natural than the second one, where the
>> offending thunks are suppressed on the spot. Unfortunately, the syntax
>> is awkward; it would be nice to write something like
>>    f (n,s) x = (!n+1, !s+x)
>> Well, I am becoming too grumpy, perhaps the bang patterns are fine. More
>> important question: are there established practices to *avoid* space
>> leaks rather than fixing them afterwards?
>>
>>
> I believe the expectation is to learn not to be surprised in the areas where
> lazy or non-strict evaluation can be overused, and to learn the advantages
> and power of non-strict evaluation over strict evaluation, so that an
> imperative programmer doesn't feel surprised or angry when things go wrong.
> 
> I blogged about writing a long running service in Haskell that ran into
> problems with the lazy State monad, and lazy Data.Map, and I discussed how I
> had to force evaluations of everything to get the program under control.
>  This wasn't for a hobby, this was for a production system.  I believe I've
> a much better handle on strict vs non-strict than when I started the
> project, but I felt pretty lost for a while  in the process of doing it.
> 
> I was even using the Maybe monad with its MonadPlus implementation to avoid
> using case statements around deconstruction, which I'm sure exacerbated some
> of my problem.  However, since Haskell will evaluate the outer-most stuff
> first, the trick seems to be to find the point at which you *really* need
> the values computed, then tell Haskell to get on with it.  You kind of have
> to have an explicit sequence point where all things need to be resolved, and
> you have to be able to see those in your code.  Sometimes you can get away
> with only doing small pieces at a time.
> 
> I had about the worst situation I've ever seen for data growth in my code.
>  I had a pile of non-strict expressions, that were all dependencies for the
> next, running forever, and never getting evaluated except at asynchronous
> and user-controlled moments.  If these expressions had been evaluated
> strictly, they would have taken up constant space, but since they were all
> thunks, I got linear data growth over time, until I blew up.
> 
> Some advice I've gotten since then was to think about using case for
> strictness rather than explicitly using seq.  Turns out case's pattern
> matching is pretty strict, and that you can often get by with that.  I
> haven't spent a lot of time with core output, but my understanding is that
> it's all let and case.
> 
> 
> 
>> 3) The standard library was not designed with space efficiency in mind;
>> for example, there is sum but no sum'.
>>
> 
> Actually I think that the standard library was designed to be consistent
> with the way the language is documented to behave.  That is to say that it's
> non-strict by default everywhere it's possible to be so.
>  Control.Monad.State selects Control.Monad.State.Lazy by default instead of
> Control.Monad.State.Strict, but both exist.
> 
> Yes, in some cases there's no strict equivalent provided, but is writing a
> strict sum really a big problem?  I think there are stricter folds included
> because they're not quite as trivial, but once you have a strict fold isn't
> strict sum pretty easy?  I suppose the type of the contained element in the
> list could make a big difference in whether the strict fold is strict
> enough, but where do you stop providing strict versions of functions for
> people?  It seems a line must be drawn somewhere, and the right solution is
> to properly educate Haskell programmers about both the power and drawbacks of
> non-strict evaluation, and when it's really necessary to turn things off.
>  Giving examples is fine, but one must learn to see the patterns where there
> is a problem that could be brewing.
> 
> Real World Haskell teaches us about the profiling tools that helped me
> uncover my problems.
> 
> 
>>
>> Best regards,
>> Alexei
>>
>> _______________________________________________
>> Haskell-Cafe mailing list
>> Haskell-Cafe at haskell.org
>> http://www.haskell.org/mailman/listinfo/haskell-cafe
>>
> 



