What type of performance regression testing does GHC go through?
ben at smart-cactus.org
Fri Mar 12 00:21:53 UTC 2021
Tom Ellis <tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk> writes:
> A user posted the following to the ghc-proposals repository. Both JB
> and RAE suggested ghc-devs as a more appropriate forum. Since I have
> no idea whether the user has ever used a mailing list before, I
> thought I would lower the activation energy by posting their message
> for them.
>> Does the GHC release or development process include regression
>> testing for performance?
>> Is this the place to discuss ideas for implementing such a thing and
>> to eventually craft a proposal?
>> I believe the performance impact of changes to GHC needs to be
>> verified/validated before release. I also believe this would be
>> feasible if we tracked metrics on building a wide variety of
>> real-world packages. Using real-world packages is one of the best
>> ways to see the actual impact users will experience. It's also a
>> great way to broaden the scope of tests, particularly with the
>> combination of language pragmas and enabled features within the
We already do this, but help is definitely wanted! In short, every
commit to GHC goes through a variety of performance testing, including:
* the performance testsuite in `base` (which I'm sure all GHC
developers are all-too-familiar with at this point)
* a run of the nofib benchmark suite
* compile-time benchmarking using the head.hackage patchset (when it is
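The acceptance logic behind such metric tests can be sketched roughly as
follows. This is a minimal illustration of comparing a measured metric
against a baseline within a tolerance window; the 5% tolerance, the metric
name, and the sample numbers are assumptions for illustration, not GHC's
actual configuration:

```python
# Sketch of a tolerance-window check for a performance metric,
# similar in spirit to how a testsuite might flag perf regressions.
# The 5% default tolerance and the sample values below are
# illustrative assumptions, not GHC's actual thresholds.

def check_metric(name: str, baseline: int, measured: int,
                 tolerance: float = 0.05) -> str:
    """Compare a measured metric against a baseline.

    Returns "ok" when the relative change is within the tolerance
    window, otherwise "regression" or "improvement".
    """
    change = (measured - baseline) / baseline
    if abs(change) <= tolerance:
        return "ok"
    return "regression" if change > 0 else "improvement"

# Example: compiler allocations grew by 10% -> flagged as a regression.
print(check_metric("compiler_allocations", 1_000_000, 1_100_000))
# prints "regression"
```

The point of the window is to absorb run-to-run noise: only changes
outside it fail the test.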
In addition to being preserved as CI artifacts, all of this information
also gets thrown into a PostgreSQL database, which is exposed via
PostgREST. The problem is that we currently don't *do* anything with it.
I have occasionally found it useful to do quick queries against it, but
it would be great if someone would step up to help improve this
situation.
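Since PostgREST exposes database tables as REST endpoints with filter
operators in the query string, such quick queries are easy to script. The
base URL, table name, and column names below are hypothetical placeholders;
only the PostgREST filter syntax (`eq.`, `order`) reflects the real API:

```python
from urllib.parse import urlencode

# Hypothetical PostgREST endpoint and schema: the base URL, the
# "results" table, and the column names are assumptions made for
# illustration; they are not the actual GHC perf database layout.
BASE_URL = "https://perf.example.org"

def perf_query(table: str, **filters: str) -> str:
    """Build a PostgREST query URL from column=operator.value filters."""
    return f"{BASE_URL}/{table}?{urlencode(filters)}"

# Fetch all rows for one metric on one (hypothetical) commit:
url = perf_query("results",
                 metric="eq.compile_allocations",
                 commit="eq.abc123",
                 order="test_name.asc")
print(url)
# A real query would then issue an HTTP GET against this URL and
# decode the JSON response, e.g. with urllib.request and json.load.
```

PostgREST returns result rows as JSON, so the output of such a query can
be fed straight into whatever analysis or dashboard tooling someone cares
to build.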