[C2hs] Profiling c2hs

Manuel M T Chakravarty chak at cse.unsw.edu.au
Wed May 12 15:35:46 EDT 2004

On Wed, 2004-05-05 at 14:08, Duncan Coutts wrote:
> All,
> I was writing a demo app to add to gtk2hs's small collection; it's a
> viewer for ghc's time profile log files, and I needed a large profile
> file to test it on so I thought I'd try profiling c2hs. Here's what I
> found:
> Looking at allocations (which corresponds quite closely with time):
> Note that these are % of the total cost, not % of their parent cost
> 11.7% is taken by the C & chs lexers which is not unreasonable.
> 77.4% is taken by the CParser
> Within the CParser:
> 72.0% is taken by parseCExtDeclList, within which
> 65.5% is taken by morphTypeNames (and 58.2% of the runtime)
> 31.4% alloc & 26.5% time was taken just by calls to Prelude.elem within
> morphTypeNames.
> In this particular run of c2hs, in which it was processing some large
> gtk+ headers, it called morphTypeNames over 55 million times.

Hmm, I knew that there was a bottleneck in the parser, but I never got
around to profiling it properly.  Thanks for doing that.  To be honest,
I am a bit surprised that morphTypeNames is the culprit, as I
benchmarked it a bit when writing the code initially and it didn't
appear to be a problem, but obviously I overlooked something.

> I narrowed this down a bit more and found that it is the call to
> morphTypeNames from parseCExtDeclList that takes the time, the call from
> parseCHeader is insignificant. Furthermore, it is the recursive calls to
> morphTypeNames that are expensive; parseCExtDeclList only called
> morphTypeNames about 1000 times (compared to 55 million recursive
> calls).
> So what's going on and what can we do?
> If I understand the code correctly, the reason we are doing this
> morphTypeNames is because we can't distinguish typedef'ed identifiers in
> the parser, we need to remember which identifiers were typedefs and then
> treat them differently in the parser later if they were. At the moment
> this is done, after parsing each declaration (which might contain
> typedefs), by modifying the remaining stream of tokens: all identifiers
> that have been discovered to be typedefs are changed from ordinary
> identifier tokens to typedef-identifier tokens. This is expensive.

To be honest, I am not entirely convinced that the basic idea of
modifying the token stream to rewrite identifier tokens is that
expensive.  I suspect it is the naive way in which morphTypeNames does
the job that's the problem.
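
[As a rough illustration of the scheme being discussed (a hypothetical
token type and simplified code, not the actual c2hs source), the naive
filter amounts to a map over the stream with a linear `elem` lookup per
identifier:]

```haskell
-- Hypothetical token type, for illustration only (not the c2hs one).
data Token = Ident String | TypeIdent String | Other String
  deriving (Eq, Show)

-- Naive rewrite: every identifier that occurs in `names` becomes a
-- typedef-identifier token.  Membership is tested with `elem`, i.e.
-- linear in the number of known typedef names.
morphTypeNames :: [String] -> [Token] -> [Token]
morphTypeNames names = map morph
  where
    morph (Ident s) | s `elem` names = TypeIdent s
    morph t                          = t
```

[After N typedef-introducing declarations, the remaining stream ends up
wrapped in N such filters, morphTypeNames ns1 (morphTypeNames ns2 (...)),
so every later token passes through all of them.]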

> Perhaps it would be possible to add this set of typedef identifiers to
> the parser state monad and thread it through to the parsers for cid and
> typedefName which could consult this parser state and match
> conditionally on the identifier token being in the typedef-identifier
> set or not. (cidOrTN would not need to do a lookup.) This would avoid
> traversing the token stream after parsing each declaration.
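
[For concreteness, the suggestion quoted above would amount to something
like the following sketch (hypothetical names, not the actual c2hs
parser): the set of typedef'd identifiers is threaded through the parser
and consulted when an identifier token is classified.]

```haskell
import qualified Data.Set as Set

-- Hypothetical sketch of the parser-state alternative: instead of
-- rewriting the token stream, thread the set of known typedef names
-- through the parser and consult it when an identifier is matched.
type TypedefSet = Set.Set String

-- Called when a declaration introduces a new typedef.
recordTypedef :: String -> TypedefSet -> TypedefSet
recordTypedef = Set.insert

-- A typedefName-style parser accepts an identifier only if it names a
-- typedef; cid accepts it only if it does not.
isTypeName :: String -> TypedefSet -> Bool
isTypeName = Set.member
```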

I guess it would be possible to interleave this with the parser, but
what I like about the current setup is that it is more modular.  I don't
think that it is a problem to traverse the token stream.  

The problem with the current setup is that morphTypeNames is interleaved
with parseCExtDeclList; ie, with each call to parseCExtDeclList that
reads a typedef, one more morphTypeNames filter is added to the token
stream.  In other words, after N typedefs, each token passes through N
instances of morphTypeNames.  Instead, we want a single instance of
morphTypeNames that collects all the typedef'd names in a set and
rewrites tokens accordingly.  If the set is represented as a balanced
tree (ie, using the "Set" module in "base/general/Sets.hs"), the
per-token cost drops from a factor of N to a factor of log N.  This would
be a much more localised change than changing the parser state.
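
[A sketch of this proposal, again with a hypothetical token type and with
Data.Set standing in for the "Set" module mentioned above: a single
filter carries the accumulated set and does one balanced-tree lookup per
token.]

```haskell
import qualified Data.Set as Set

-- Hypothetical token type, for illustration only.
data Token = Ident String | TypeIdent String | Other String
  deriving (Eq, Show)

-- A single instance of the filter: one pass over the stream, one
-- O(log n) balanced-tree lookup per token, instead of N stacked
-- linear filters.
morphTypeNames :: Set.Set String -> [Token] -> [Token]
morphTypeNames tds = map morph
  where
    morph (Ident s) | s `Set.member` tds = TypeIdent s
    morph t                              = t

-- When another typedef is read, the set is extended (O(log n)) rather
-- than adding yet another filter to the token stream.
addTypedef :: String -> Set.Set String -> Set.Set String
addTypedef = Set.insert
```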

What do you think?


PS: Axel, you are right that the large memory footprint is the problem,
but in a lazy language that is usually pretty closely correlated with
runtime...and I have known for quite a while that the problem is in the
parser.  So, Duncan's finding seems very relevant.
