GHC build times on newer MacBook Pros?
qdunkan at gmail.com
Sat Aug 27 01:35:02 CEST 2011
On Tue, Aug 23, 2011 at 10:24 AM, David Terei <davidterei at gmail.com> wrote:
> I have a 16 core machine at work (with 48GB of ram, a perk of the job
> :)). GHC can saturate them all. Can validate GHC in well under 10
> minutes on it.
To wander a bit from the topic: when I first saw this I thought "wow,
ghc builds in parallel now, I want that", but then I realized it's
because ghc itself uses make, not --make. --make's automatic
dependency chasing is convenient, but since it re-discovers
dependencies on every build and can't compile in parallel, a
make-based build should be a lot faster. Also, --make doesn't
understand the hsc->hs link, so in practice I have to write a fair
amount of manual dependencies anyway. So it inspired me to try to
switch my own project from --make to make.
I took a look at the ghc build system, and even after reading the
documentation it's hard for me to understand. The first issue is how
to get ghc -M to understand hsc2hs. My workaround was to find *.hsc,
and have 'make depend' depend on $(patsubst %.hsc, %.hs, $(all_hsc)),
so that by the time ghc -M runs it can find the generated .hs files.
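For the record, the workaround looks roughly like this as make rules
(the src/ layout and variable names are just from my setup, and the
-dep-makefile flag spelling is from memory, so check against your ghc
version):

```make
# Find the .hsc sources and the .hs files they will generate.
hs_srcs := $(wildcard src/*.hs)
all_hsc := $(wildcard src/*.hsc)
hsc_hs  := $(patsubst %.hsc, %.hs, $(all_hsc))

# Pattern rule for the hsc->hs link that --make doesn't know about.
%.hs: %.hsc
	hsc2hs -o $@ $<

# Generate the .hs files first, so ghc -M can chase imports through them.
depend: $(hsc_hs)
	ghc -M -dep-makefile depend.mk $(hs_srcs) $(hsc_hs)
```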
Then the more perplexing issue: I'm used to using -odir and -hidir
with --make to maintain separate trees of .o and .hi files built for
profiling and testing, but I'm not sure how to do that with make, and
fiddling with VPATH has been unsuccessful so far. Otherwise I could
do wholesale preprocessing of the ghc-generated deps file, but that
seems clunky, in addition to tripling its size. I know ghc has
"ways", and the rules/*dependencies* stuff in its build system is
hard for me to read, but I don't think ghc is doing that. Maybe it
just doesn't allow profiling and non-profiling builds to coexist?
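One alternative I haven't fully tried is per-way suffixes instead of
separate trees, which I believe is closer to what ghc's own build
does: profiling objects get .p_o/.p_hi suffixes and live alongside
the plain ones. If I'm reading the flag docs right, something like:

```make
# Plain way:
%.o: %.hs
	ghc -c $<

# Profiling way, same tree, different suffixes:
%.p_o: %.hs
	ghc -c -prof -osuf p_o -hisuf p_hi $<

# ghc -M can emit dependency lines for both suffixes in one pass:
depend.mk: $(hs_srcs)
	ghc -M -dep-makefile $@ -dep-suffix '' -dep-suffix p_ $(hs_srcs)
```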
Maybe I shouldn't be asking make questions on the ghc list, but it's
related to how ghc does -odir and -hidir, and to the best way to
build haskell, so it's at least somewhat relevant :)
To bring it back to ghc a bit: wouldn't it be nice if there didn't
have to be a tradeoff between fast-but-awkward-to-set-up and
slow-but-convenient? For larger projects, either make or something
with equivalent power is probably necessary, given that C, hsc2hs,
etc. all need to be integrated, but a wiki page with some make
recipes for ghc could help a bunch there. I'd be happy to put one up
as soon as I figure out the current situation.
Then there's simply making --make faster... I saw a talk about a
failed attempt to parallelize ghc, but it seems like he was trying to
parallelize the compiler itself... why not take the make approach and
simply start many ghcs? You'd have to pay the ghc startup cost per
file, but relatively speaking I think that's pretty fast nowadays. Or
you could do a work-stealing kind of thing, where ghc marks a file as
"in progress" and then each ghc tries to grab a file to compile. Then
you just start a whole bunch of 'ghc --make's and let them fight it
out.
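The work-stealing idea doesn't even need anything fancy from ghc
itself: the filesystem can serve as the "in progress" marker, since
mkdir is atomic. A toy sketch (dummy module names, no actual
compilation; a real worker would run ghc -c where the echo is):

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
mkdir "$workdir/locks"
# Stand-ins for modules; a real build would list the .hs files here.
for m in A B C D E F; do touch "$workdir/$m.hs"; done

worker() {
  for src in "$workdir"/*.hs; do
    name=$(basename "$src" .hs)
    # Atomic claim: mkdir succeeds for exactly one worker per module.
    if mkdir "$workdir/locks/$name" 2>/dev/null; then
      # A real worker would compile here: ghc -c "$src" ...
      echo "$name" >> "$workdir/claimed.txt"
    fi
  done
}

# Several workers fight over the same pool of files.
worker & worker & worker &
wait
sort "$workdir/claimed.txt"
```

Each module gets claimed by exactly one worker, no coordinator
process needed.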