Measuring performance of GHC
Alan & Kim Zimmerman
alan.zimm at gmail.com
Sun Dec 4 19:50:54 UTC 2016
I agree.
I find that compiling code which works with large data structures, such as
the GHC AST via the GHC API, gets pretty slow. It is slow to the point that
I have had to explicitly disable optimisation on HaRe, otherwise the build
takes too long.
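(For reference, one way to disable optimisation for just one package is a
cabal.project stanza; this is an illustrative sketch, the package name and
file layout are assumptions about the HaRe setup, not taken from the post:)

```
-- cabal.project: turn off optimisation for the HaRe package only,
-- leaving dependencies optimised as usual.
package HaRe
  optimization: False
```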
Alan
On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta <michal.terepeta at gmail.com>
wrote:
> Hi everyone,
>
> I've been running nofib a few times recently to see the effect of some
> changes
> on compile time (not the runtime of the compiled program). And I've started
> wondering how representative nofib is when it comes to measuring compile
> time
> and compiler allocations? It seems that most of the nofib programs compile
> really quickly...
>
> Is there some collection of modules/libraries/applications that was put
> together with the purpose of benchmarking GHC itself and I just haven't
> seen/found it?
>
> If not, maybe we should create something? IMHO it sounds reasonable to have
> separate benchmarks for:
> - Performance of GHC itself.
> - Performance of the code generated by GHC.
>
> Thanks,
> Michal
>
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
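(As a quick way to get compile-time and allocation numbers for GHC itself
outside nofib, one can pass RTS options to the compiler; the flags below
are standard GHC options, but the module name is purely illustrative:)

```
# Force recompilation and print GHC's own runtime statistics,
# including total bytes allocated and mutator/GC time.
#   -fforce-recomp : recompile even if the module is up to date
#   +RTS -s -RTS   : emit the RTS summary for the ghc process itself
ghc -O1 -fforce-recomp Foo.hs +RTS -s -RTS
```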