Measuring compiler performance

Andreas Klebinger klebinger.andreas at gmx.at
Mon Apr 6 09:03:07 UTC 2020


Hi Simon,

Things I do to measure performance:

* compile nofib/spectral/simple/Main.hs; look at instructions (perf) and
allocations/time (+RTS -s)
* compile nofib as a whole (use NoFibRuns=0 to avoid running the
benchmarks); look at compile time/allocations
* compile the Cabal library (cd cabal-head/Cabal && ghc Setup.hs
-fforce-recomp); look at allocations/time via +RTS -s, or instructions
using perf
* compile a particular file that triggers the case I want to optimize
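For the first and third items, the invocations look roughly like this. The paths are illustrative and assume a Hadrian build tree with nofib checked out alongside; adjust to your setup:

```shell
# Hadrian puts the stage-2 compiler under _build/stage1/bin (illustrative path).
GHC=_build/stage1/bin/ghc

# Instruction counts via perf (Linux only):
perf stat -e instructions $GHC -O -fforce-recomp nofib/spectral/simple/Main.hs

# Allocations and time from GHC's own RTS summary:
$GHC -O -fforce-recomp nofib/spectral/simple/Main.hs +RTS -s
```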

In general:
Adjust these depending on the flags you want to look at. If you are
optimizing the simplifier, -O0 will be useless; if you are optimizing
type checking, -O2 will be pointless. And so on.

In general I only compile, without linking, since linking adds overhead
which isn't really part of GHC.
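To stop before the link step, -c (compile only) does the job for a single file; a sketch:

```shell
# Compile only, no linking, so link time doesn't pollute the measurement.
# Path to the stage-2 compiler is illustrative.
_build/stage1/bin/ghc -O -fforce-recomp -c Main.hs +RTS -s
```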

> Another question regarding performing compiler perf measurements
> locally is which build flavour to use: So far I have used the "perf"
> flavour. A problem here is that a full build seems to take close to an
> hour. A rebuild with --freeze1 takes ~15 minutes on my machine. Is
> this the right flavour to use?
Personally I use the quick flavour, freeze stage 1, and configure Hadrian
to pass -O to stage 2, unless I know the thing I'm working on will
benefit significantly from -O2.
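A sketch of the kind of Hadrian invocation this describes. The key-value settings syntax is documented in hadrian/doc/user-settings.md; the exact pattern below is my assumption and worth double-checking against that document:

```shell
# Quick flavour, stage 1 frozen so only stage 2 is rebuilt.
# The += setting asks for the stage-2 GHC to be built with -O
# (syntax per hadrian/doc/user-settings.md; verify before relying on it).
hadrian/build -j --flavour=quick --freeze1 "stage1.*.ghc.hs.opts += -O"
```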

That is, if I optimize an algorithm, -O2 won't really make a difference,
so I use -O. If I optimize a particular hotspot in the implementation of
an algorithm, e.g. by adding bangs, it's worthwhile to look at -O2 as well.

You can also set particular flags for specific files only, using
OPTIONS_GHC pragmas. This way you avoid compiling the whole of GHC
with -O/-O2.
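For example, a pragma at the top of a single module overrides the build system's optimization flags for just that file (the module name here is made up):

```haskell
{-# OPTIONS_GHC -O2 #-}
module GHC.Some.HotModule where  -- hypothetical module under investigation
```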

> Ideally I wouldn't have to perform these measurements on my local
> machine at all! Do you usually use a separate machine for this? _Very_
> convenient would be some kind of bot whom I could tell e.g.
I use another machine. Others only look at metrics which are less
affected by system load, like allocations.
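Allocation counts from +RTS -s are stable enough to compare across runs even on a loaded machine. A minimal sketch of extracting that figure from the summary for comparison between two compiler builds; the line format assumed here matches GHC's -s output:

```python
import re

def parse_allocations(rts_summary: str) -> int:
    """Extract the 'bytes allocated in the heap' figure from +RTS -s output."""
    m = re.search(r"([\d,]+) bytes allocated in the heap", rts_summary)
    if m is None:
        raise ValueError("no allocation line found in RTS summary")
    return int(m.group(1).replace(",", ""))

# Example fragment of a +RTS -s summary:
sample = """\
   1,234,567,890 bytes allocated in the heap
      12,345,678 bytes copied during GC
"""
print(parse_allocations(sample))
```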

> _Very_ convenient would be some kind of bot whom I could tell e.g.
Various people have come up with scripts to automate the measurements on
nofib, which gets you closer to this. I have discussed with Ben and
others a few times in the past having a wider framework for collecting
compiler performance indicators. But it's a lot of work to get right,
and once the immediate need is gone those ideas usually get shelved again.
> BTW what's the purpose of the profiled GHC modules built with this
> flavour which just seem to additionally prolong compile time? I don't
> see a ghc-prof binary or similar in _build/stage1/bin.
As far as I know, if you compile (non-GHC) code using -prof, then you
will need the ghc library available in the profiled way. But it would be
good to have the option to disable this.

> Also, what's the status of gipeda? The most recent commit at
> https://perf.haskell.org/ghc/ is from "about a year ago"?
I think the author stopped maintaining it after he switched jobs, so it's
currently not useful for investigating performance. But I'm sure he
wouldn't object if anyone were to pick it up.

Cheers Andreas

