Slow compile times

Simon Marlow simonmar@microsoft.com
Fri, 2 May 2003 11:29:22 +0100


> On Fri, May 02, 2003 at 02:56:45AM +0200, Peter Simons wrote:
> > Kirsten Chevalier writes:
> >
> >  > I'm running GHC with the following options:
> >  > -fallow-overlapping-instances -cpp -O1 -fvia-C -fglasgow-exts
> >  > -package text -dcore-lint -prof -auto-all +RTS -K48M -M1024M -RTS
> >  > -fno-strictness -Rghc-timing
> >
> > Did you try setting a bigger "suggested heap size" for GHC with
> > the run-time-system option "-H"? I found that setting something
> > like "-H128M" affected compile times positively on my machine.
> >
> > I hope this helps ...
> >
>
> I tried compiling the same file with -H512M -M512M instead of -M1024M,
> keeping all other options the same. I got the following results:
>
> <<ghc: 3763646968 bytes, 33 GCs, 17879484/44290920 avg/max bytes
> residency (6 samples), 528M in use, 0.00 INIT (0.83 elapsed),
> 54.89 MUT (116.82 elapsed), 9.58 GC (11.56 elapsed) :ghc>>
>
> With -O0, I get:
>
> <<ghc: 1482033536 bytes, 26 GCs, 9519452/29147696 avg/max bytes
> residency (5 samples), 515M in use, 0.00 INIT (0.28 elapsed),
> 15.27 MUT (69.29 elapsed), 2.63 GC (2.92 elapsed) :ghc>>
>
> This is better than before, but it still seems pretty slow; any other
> suggestions?

The main thing to reduce GC time is to use -H<size>, as suggested.  We
do have some compilation speed "issues" with large source files, which I
hope we'll get around to investigating at some point.
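
To make that concrete, an invocation along the following lines should do
it (Foo.hs and the 256M figure are only placeholders for illustration;
the remaining flags are the ones you were already using):

    ghc -O1 -fglasgow-exts -fallow-overlapping-instances -cpp -fvia-C \
        -prof -auto-all -Rghc-timing Foo.hs +RTS -H256M -K48M -M512M -RTS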

You're using -prof, which precludes use of the native code generator.
If you can do without -prof, then -fasm should improve compilation times
a lot without affecting performance of the compiled program too much.
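That is, roughly (again, Foo.hs is just a stand-in for your source file):

    ghc -O1 -fglasgow-exts -fallow-overlapping-instances -cpp -fasm \
        Foo.hs +RTS -H256M -RTS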

Cheers,
	Simon