Looking for advice on profiling

Duncan Coutts duncan.coutts at worc.ox.ac.uk
Tue Nov 9 07:53:51 EST 2004


Hi all,

I'm looking for some advice on profiling and any suggestion on what
might be going on with this program.

I'm profiling c2hs, Manuel Chakravarty's FFI preprocessor, with an eye
to reducing its time and space usage. I'm also trying to add binary
serialisation.

c2hs basically works by parsing a load of C header files, building up
maps of info on function & type names. It then scans a .chs file and
produces a .hs file using the info it gained from the C header files.

Producing the .hs file is fairly quick once all the symbol maps have
been constructed. Parsing the C headers and gathering the info is fairly
slow. So building a project that uses lots of .chs files is quite slow
because it re-parses the header files each time. We originally modified
c2hs to process multiple .chs files in one go so that the work could be
shared; however, this style is not compatible with most build tools
(e.g. automake). Another approach is to dump a representation of the
symbol maps after parsing the header files and then read this dump back
for each .chs file.

So, enough background; here's the issue: the binary serialisation seems
to be incredibly slow and I can't figure out why.

I'm now using the Binary module pinched from the ghc sources and
instances derived by DrIFT for all the c2hs data structures. I've also
derived NFData instances (DeepSeq) so that I can make sure that the data
to be serialised is already fully evaluated.
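
To give an idea of the shape of those instances and the deepSeq step,
here's a much simplified sketch (stand-in classes and a made-up CObj
type; the real GHC Binary interface and the c2hs data structures are
more involved):

module SerialiseSketch where

import Data.Word (Word8)

-- Simplified stand-in for the Binary class: serialise to a list of bytes.
class Binary a where
  put :: a -> [Word8]

-- Simplified stand-in for the NFData/DeepSeq class.
class NFData a where
  rnf :: a -> ()

deepSeq :: NFData a => a -> b -> b
deepSeq x y = rnf x `seq` y

instance Binary Word8 where
  put b = [b]

instance NFData Word8 where
  rnf b = b `seq` ()

-- A made-up type standing in for the c2hs symbol table entries.
data CObj = CFun Word8
          | CTypedef Word8 CObj

-- The shape of a DrIFT-derived instance: one tag byte per constructor,
-- then the fields in order.
instance Binary CObj where
  put (CFun n)       = 0 : put n
  put (CTypedef n t) = 1 : put n ++ put t

instance NFData CObj where
  rnf (CFun n)       = rnf n
  rnf (CTypedef n t) = rnf n `seq` rnf t

-- Force the whole structure before serialising, so the serialisation
-- timing is not mixed up with the cost of evaluating thunks.
serialise :: CObj -> [Word8]
serialise obj = obj `deepSeq` put obj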

I inserted timing points in the program to see what's taking so long.
Here's a sample output:

c2hs/c2hs +RTS -M400m -H300m -RTS -C-I/usr/include/glib-2.0
-C-I/usr/lib/glib-2.0/include --precomp=glib.precomp glib.h
starting c2hs
elapsed time: 0.00 (start)
elapsed time: 0.00 (about to run cpp)
elapsed time: 0.01 (about to parse header)
elapsed time: 1.90 (about to emit warnings)
elapsed time: 1.90 (about to deepSeq header)
elapsed time: 5.24 (about to serialise header)
elapsed time: 52.47 (finished serialising header)
elapsed time: 52.47 (finish)
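
For reference, the timing points are roughly this sort of thing (a
sketch with made-up names, using the time package; the actual code in
c2hs differs):

module TimingSketch where

import Data.Time.Clock (UTCTime, getCurrentTime, diffUTCTime)
import Text.Printf (printf)

-- Print a line in the same format as the trace above, giving elapsed
-- wall-clock time relative to a start time captured at program start.
timePoint :: UTCTime -> String -> IO ()
timePoint start label = do
  now <- getCurrentTime
  let secs = realToFrac (diffUTCTime now start) :: Double
  printf "elapsed time: %.2f (%s)\n" secs label

main :: IO ()
main = do
  start <- getCurrentTime
  timePoint start "start"
  -- ... run cpp, parse the header, deepSeq it, serialise it ...
  timePoint start "finish"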

glib.h (and its includes, after cpp'ing) is about 3k lines.
So it takes a few seconds to parse the file and a few more to deepSeq
the whole structure. However after that it takes a long time to
serialise the structure. glib.h is quite small in comparison to
gtk/gtk.h which is about 25k lines. So gtk.h takes ages. In fact it uses
so much memory serialising that it cannot complete on my machine with
-H450m (the max I can set it to on my 512Mb machine without swap
thrashing). When I run it in the original mode that does no
serialisation it completes fine in about 1min.

The binary file produced for glib.h is about 4Mb in the end, which is
not huge I'd say. We're using String rather than FastString, so that
slows things down a bit and takes more space, but even so. (I modified
the Binary module to store lists in a length + elems format rather than
the 0 for [], 1 for (:) format, and to store chars as just one byte, so
strings are not stored too inefficiently.) The file size for gtk.h is
about 20Mb (tested on a machine with more memory).
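
To make those two formats concrete, here's a sketch in a simplified
byte-list model (the real Binary module writes through a handle rather
than building a list, so this is only illustrative):

module ListFormatSketch where

import Data.Bits (shiftR)
import Data.Char (ord)
import Data.Word (Word8)

-- Stock-style encoding: a tag byte per cons cell, 0 for [] and 1 for (:).
putListTagged :: (a -> [Word8]) -> [a] -> [Word8]
putListTagged _       []     = [0]
putListTagged putElem (x:xs) = 1 : putElem x ++ putListTagged putElem xs

-- Modified encoding: a length prefix, then the elements back to back.
putListLength :: (a -> [Word8]) -> [a] -> [Word8]
putListLength putElem xs = putInt (length xs) ++ concatMap putElem xs

-- A Char stored as a single byte (fine for the ASCII identifiers that
-- dominate C headers).
putChar8 :: Char -> [Word8]
putChar8 c = [fromIntegral (ord c)]

-- A 4-byte big-endian Int, just for this sketch.
putInt :: Int -> [Word8]
putInt n = [ fromIntegral (n `shiftR` 24)
           , fromIntegral (n `shiftR` 16)
           , fromIntegral (n `shiftR` 8)
           , fromIntegral n ]

-- With one byte per Char, a tagged String of length n costs 2n+1 bytes
-- whereas a length-prefixed one costs n+4.
putString :: String -> [Word8]
putString = putListLength putChar8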

When I do time profiling, the big cost centres come up as putByte and
putWord. When I profile for space it shows the large FiniteMaps
dominating almost everything else. From that I originally guessed that
the serialisation must be forcing loads of thunks, which is why it
shows up so highly on the profile. However, even after doing the
deepSeq before serialisation it takes a great deal of time, so I'm not
sure what's going on.

The retainer profiling again shows that the FiniteMaps are holding on to
most stuff.

A major problem no doubt is space use. For the large gtk/gtk.h, when I
run with +RTS -B to get a beep every major garbage collection, the
serialisation phase beeps continuously while the file grows.
Occasionally it seems to freeze for tens of seconds, not doing any
garbage collection and not doing any file output but using 100% CPU;
then it
carries on outputting and garbage collecting furiously. I don't know how
to work out what's going on when it does that.

I don't understand how it can be generating so much garbage when it is
doing the serialisation stuff on a structure that has already been fully
deepSeq'ed.

Any ideas? Where should I direct my investigation? I can supply tarballs
and profiling results and graphs if that would be helpful. Unfortunately
the code base is quite big, which makes things difficult.

Duncan


