[Haskell-cafe] Re: GHC: compile using multiple cores?

Peter Verswyvelen bugfact at gmail.com
Thu Apr 9 19:24:38 EDT 2009


That should be fairly easy to optimize, I guess? Maybe even using read-only
shared memory to share the parsed database in native binary format?
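
A rough sketch of the binary-format half of that idea (not anything GHC
actually does; the PackageInfo type below is made up, and the binary
package's encodeFile/decodeFile stand in for whatever serialisation would
really be used): parse the textual database once, cache it in binary form,
and let every later compiler process load the cache instead of Read-parsing
the text again.

    import Data.Binary (Binary (..), decodeFile, encodeFile)

    -- Made-up stand-in for one entry of the installed-package database.
    data PackageInfo = PackageInfo
      { pkgName    :: String
      , pkgVersion :: [Int]
      , pkgDepends :: [String]
      } deriving (Show, Read)

    instance Binary PackageInfo where
      put (PackageInfo n v d) = put n >> put v >> put d
      get = do
        n <- get
        v <- get
        d <- get
        return (PackageInfo n v d)

    -- Parse the textual package.conf once and cache the result.
    writeCache :: FilePath -> [PackageInfo] -> IO ()
    writeCache = encodeFile

    -- Every later ghc invocation loads the cache instead of Read-parsing
    -- the whole textual file again.
    readCache :: FilePath -> IO [PackageInfo]
    readCache = decodeFile
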
On Fri, Apr 10, 2009 at 1:08 AM, Andrea Vezzosi <sanzhiyan at gmail.com> wrote:

> The main bottleneck right now is that each ghc process has to read the
> package.conf, which afaiu is done with Read and it's awfully slow,
> especially if you have many packages installed.
> I've started seeing total time improvements when approaching ~300% CPU
> usage, and that's with only the extralibs installed.
>
> On Thu, Apr 9, 2009 at 5:51 PM, Neil Mitchell <ndmitchell at gmail.com>
> wrote:
> >> Not with cabal; with GHC, yes, assuming you have enough modules. Use ghc
> >> -M to dump a makefile, and then make -j20 (or whatever you have).
> >
> > There is a performance penalty to running ghc on separate files vs
> > --make. If your number of cores is limited, --make may be better. I'd
> > love someone to figure out what the cross-over point is :-)
> >
> > As a related question, how does GHC manage to run in parallel under
> > make -j3? For my programs, if I want them to run in parallel, I have to
> > type +RTS -N3. Can I use the same trick as GHC? (See the sketch after
> > the quoted thread.)
> >
> > Thanks
> >
> > Neil
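
To make the ghc -M / make -j recipe quoted above a bit more concrete, and
as one hedged reading of the -j3 question: the parallelism in a make -j
build comes from make starting several independent ghc processes at once,
not from the threaded RTS, so a program can get the same effect by spawning
worker processes itself. A minimal sketch (the runJobs helper and the
A.hs/B.hs/C.hs module names are invented for illustration; it ignores the
dependency ordering and the job-count cap that make provides):

    import System.Exit (ExitCode)
    import System.Process (runCommand, waitForProcess)

    -- Start every command as its own OS process, then wait for each one.
    -- The parallelism comes from the operating system scheduling the
    -- child processes, exactly as with make -j, so the parent needs no
    -- +RTS -N flag.
    runJobs :: [String] -> IO [ExitCode]
    runJobs cmds = mapM runCommand cmds >>= mapM waitForProcess

    main :: IO ()
    main = do
      codes <- runJobs ["ghc -c A.hs", "ghc -c B.hs", "ghc -c C.hs"]
      print codes

With ghc -M, make takes care of the dependency order and keeps at most the
requested number of jobs running; the sketch only shows the
process-spawning part.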