<div dir="ltr"><div>Hi Andreas,</div><div><br></div><div>I similarly benchmark compiler performance by compiling Cabal, but only occasionally. I mostly trust the ghc/alloc metrics in CI and check Cabal when I think something is afoot and/or want to measure runtime, not only allocations.<br></div><div><br></div><div>I'm inclined to think that for my purposes (testing the impact of optimisations) the GHC codebase offers sufficient variety to turn up fundamental regressions, but maybe it makes sense to build some packages from head.hackage to detect regressions like <a href="https://gitlab.haskell.org/ghc/ghc/-/issues/19203">https://gitlab.haskell.org/ghc/ghc/-/issues/19203</a> earlier. It's all a bit open-ended, and frankly I don't think I would get anything done if every one of my patches had to get to the bottom of all regressions and improvements on the entire head.hackage set. I somewhat trust that users will eventually complain and file a bug report, and that our CI efforts mean compiler performance will improve on average. <br></div><div><br></div><div>Then again, it's probably more of a tooling problem: I simply don't know how to collect compiler performance metrics for arbitrary cabal packages.<br></div><div>If these metrics were collected as part of CI, perhaps in a nightly or weekly job, it would be easier to get to the bottom of a regression before it manifests in a released GHC version. But it all depends on how easy that would be to set up and how many CI cycles it would burn, and I certainly don't feel I'm in a position to answer either question.</div><div><br></div><div>Cheers,<br></div><div>Sebastian<br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 20 Jan
2021 at 15:28, Andreas Klebinger <<a href="mailto:klebinger.andreas@gmx.at">klebinger.andreas@gmx.at</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello Devs,<br>
<br>
When I started working on GHC a few years back, the Wiki recommended<br>
using nofib/spectral/simple/Main.hs as<br>
a test case for compiler performance changes. I've been using it ever<br>
since.<br>
<br>
"Recently" the cabal test (compiling cabal-the-library) has become<br>
something of a default benchmark for GHC performance.<br>
I've used the Cabal test as well, and it's probably a better test case<br>
than nofib/spectral/simple/Main.hs.<br>
I've started using both: usually spectral/simple to benchmark<br>
intermediate changes, and then the Cabal test for the final patch at<br>
the end. So far I have rarely seen a large difference between Cabal<br>
and spectral/simple. Sometimes the magnitude of the effect differed<br>
between the two, but I've never seen one regress/improve while the<br>
other didn't.<br>
<br>
Since the topic came up recently in a discussion, I wonder whether<br>
others use similar means to quickly benchmark GHC changes, and what<br>
your experience has been with how representative the simpler<br>
benchmarks are compared to the Cabal test.<br>
<br>
Cheers,<br>
Andreas<br>
_______________________________________________<br>
ghc-devs mailing list<br>
<a href="mailto:ghc-devs@haskell.org" target="_blank">ghc-devs@haskell.org</a><br>
<a href="http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs" rel="noreferrer" target="_blank">http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs</a><br>
</blockquote></div>
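On the tooling question raised above (collecting compiler performance metrics for arbitrary cabal packages): one low-tech approach is GHC's `-ddump-timings` flag, which prints per-pass allocation and time figures and can be passed through with `cabal build --ghc-options=-ddump-timings`. The script below is only a sketch: it assumes the dump lines carry an `alloc=<bytes>` field (as in `CodeGen [Main]: alloc=107496 time=1.926`), and the helper names (`allocOf`, `totalAlloc`) are made up for illustration, not an existing tool.

```haskell
-- Sketch: sum the per-pass "alloc=" figures from a -ddump-timings dump.
-- Assumes lines shaped like:  CodeGen [Main]: alloc=107496 time=1.926
module Main where

import Data.List (isPrefixOf, stripPrefix)
import Data.Maybe (mapMaybe)

-- Pull the alloc=<n> figure out of one timings line, if it has one.
allocOf :: String -> Maybe Integer
allocOf l = case dropWhile (not . ("alloc=" `isPrefixOf`)) (words l) of
  (w:_) -> stripPrefix "alloc=" w >>= readInteger
  []    -> Nothing
  where
    readInteger s = case reads s of
      [(n, "")] -> Just n
      _         -> Nothing

-- Total compiler allocation over the whole dump.
totalAlloc :: String -> Integer
totalAlloc = sum . mapMaybe allocOf . lines

main :: IO ()
main = interact (show . totalAlloc)
```

Usage would be something along the lines of `cabal build --ghc-options=-ddump-timings 2>&1 | runghc TotalAlloc.hs`, yielding one coarse allocations-per-package number to compare across two GHC builds.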