[Haskell-cafe] An interesting paper from Google

Evan Laforge qdunkan at gmail.com
Mon Oct 18 21:37:35 EDT 2010

>> Of course, I'm talking about profiling in time. GHC also enables you
>> to profile in space as well. I'm not actually sure to which one
>> you're referring.
> In general, time profiling. Although the space profiling is useful too,
> it gives you hints on what the (lazy) program does, as opposed to what
> you think it does. The retainer graphs are cool, e.g. you might see that
> some code hangs on to data longer than you thought, and you can save some
> heap and GC time as a result.

In my experience, space profiling is more relevant, because the big
time killer is the GC, and the reason for that is space leaks.  That's
also the thing that is essential to fix for interactive apps because
it causes GC hangs.  But while I think the heap profiler is pretty
good, it's still not a trivial matter by any means.

Unfortunately, tracking down space leaks is still very much a
trial-and-error process for me.  Sometimes it leads me right to a
suspicious thunk, and a quick $! or bang pattern fixes it.  But often,
no matter how much I experiment with strictifying things, the lag,
drag, and void are unaffected.  Sometimes it even causes more drag.
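To illustrate the kind of fix that sometimes works, here's a minimal
sketch (hypothetical sum functions, not from any real program): a lazy
left fold accumulates a chain of (+) thunks, while a bang pattern
forces the accumulator at each step and keeps the heap flat.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Leaky: the accumulator is never forced, so this builds ~1M thunks
-- before the final print demands them.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Fixed: the bang pattern forces acc on every recursive call, so the
-- heap profile stays flat instead of showing a triangle of thunks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = do
  print (sumLazy   [1..1000000])  -- works, but leaks space first
  print (sumStrict [1..1000000])
```

Both print the same answer; the difference only shows up in the heap
profile and GC time.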

There's also a GHC bug that makes it hang when filtering the heap
profile by biography, so that feature is usually unavailable.

I've had variable results with time profiling too.  As far as I can
see, you can't sort the output by allocation rather than time, and
sometimes the times don't seem to add up (e.g. 'modify' is n% of
time, but all of its instances in the list are 0.0%).

For instance, I currently have the top consumer of both time and alloc
as 'get', which is 'lift . Monad.State.Strict.get'.  Of course it
occurs in a million places in the complete profile, along with
mysteries like a line with 0 entries and 0.7%/0.1% time/alloc.  Doesn't
0 entries mean it was never called?  Meanwhile, a line with 37000
entries has 0.2%/0.2%.  Is the difference in how the 'get'ed value was
used?  And then there's the wider question of how 'get' is taking so
much time and space.  Doesn't it just return a pointer to the State
value?  Biographical profiling shows large amounts of void, lag, and
drag, but no clear way to trace that back to the responsible code.
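One place such costs can hide, at least in my experience, is in the
state itself rather than in 'get': even in Control.Monad.State.Strict,
plain 'modify' is lazy in the new state, so each step allocates a
thunk that 'get' later has to return and something eventually forces.
A minimal sketch of that (hypothetical counters, assuming mtl's
'modify'' is available):

```haskell
import Control.Monad (replicateM_)
import Control.Monad.State.Strict (execState, modify, modify')

-- Leaky: strict-State bind forces the (result, state) pair, but not
-- the state inside it, so this builds a chain of 100000 (+1) thunks.
countLazy :: Int -> Int
countLazy n = execState (replicateM_ n (modify (+ 1))) 0

-- Fixed: modify' forces the new state at each step, so the state is
-- always an evaluated Int and the thunk chain never forms.
countStrict :: Int -> Int
countStrict n = execState (replicateM_ n (modify' (+ 1))) 0

main :: IO ()
main = do
  print (countLazy   100000)
  print (countStrict 100000)
```

The profiler then charges the evaluation of that thunk chain to
whatever cost centre happens to force it, which may be part of why the
numbers look like they land in the wrong place.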

And what does it mean when the top consumer is 'lift'?  What's it
*doing* in there?

Maybe time and space profiling is just hard, no matter the tools, or
maybe it's just hard for me.  But it's been by far the most opaque and
slippery part of Haskell for me.  Perhaps more people don't have a
problem with it only because there aren't that many long-running
interactive Haskell apps out there.

Also, I don't know what the time axis on the heap graphs is, but it
appears to be CPU time: I can't correlate it with the time of anything
else, and if I try forcing things at certain times to see the effect,
there is no visible effect.  The space axis also doesn't correspond to
the max residency in the -s output, and for that matter the .prof
total alloc doesn't correspond to the 'bytes allocated in the heap'
from -s.  The 'n bytes x seconds' figure in the heap graph doesn't
seem to correspond to the alloc rate, or anything else I can see.  But
I guess I don't really need to know what these numbers mean; I just
need to know I want them to get smaller :)

My current mystery is a 'sequence_ $ replicate n
run_something_expensive' that shows large amounts of void and drag the
first 5 times... but the 6th time it's all been converted to either
USE or LAG.  The only thing "using" the results is 'DeepSeq.deepseq'.
The actual use is split between a very large number of SCCs and types.
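For concreteness, a stripped-down sketch of that pattern (with a cheap
hypothetical stand-in for run_something_expensive; 'sequence_ .
replicate n' is the same as 'replicateM_ n'):

```haskell
import Control.DeepSeq (deepseq)
import Control.Monad (replicateM_)

-- Hypothetical stand-in for run_something_expensive.
runSomething :: IO [Int]
runSomething = return (map (* 2) [1 .. 1000])

main :: IO ()
main = replicateM_ 6 $ do
  xs <- runSomething
  -- deepseq forces the whole structure before anything else touches
  -- it, so in a biographical profile those cells should count as USE
  -- rather than DRAG or VOID.
  xs `deepseq` print (sum xs)
```

Which makes it all the stranger that the first 5 iterations profile
differently from the 6th, when each iteration runs the same code.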

Anyway, I'm not sure where I was going with all this, except that it's
a source of continuing confusion for me, and if others have different
or similar experiences I'd be interested to hear about them.  While I
would of course love better debugging, I can always throw in more
'trace's and write more detailed tests until the bug has nowhere to
hide.  No such luck with space leaks.  If I were a Haskell-using
company and could fund tool improvement, it would be in the area of
tools for detecting and tracking down space usage, or even static
analysis for space leaks.
