Attempt at a real world benchmark
Moritz Angermann
moritz at lichtzwerge.de
Fri Dec 9 09:37:11 UTC 2016
>> Actually, now that I think about it: What about if this were integrated
>> into the Cabal infrastructure? If I specify "upload-perf-numbers: True"
>> in my .cabal file, any project on (e.g.) GitHub that wanted to opt-in
>> could do so, they could build using Travis, and voila!
>>
>
> Post-shower addendum:
>
> If we had the right hooks in Cabal we could even also track the
> *runtimes* of all the tests. (Obviously a bit more brittle because one
> expects that adding tests would cause a performance hit, but could still
> be valuable information for the projects themselves to have -- which
> could be a motivating factor for opting in to this scheme.)
>
> Obviously it would have to be made very easy[1] to compile with GHC HEAD
> on travis for this to have much value for tracking regressions "as they
> happen" and perhaps a "hey-travis-rebuild-project" trigger would have to
> be implemented to get daily/weekly builds even when the project itself
> has no changes.
>
> We could perhaps also marshal a bit of the Hackage infrastructure
> instead? Anyway, loads of variations on this theme. The key point here
> is that the burden of keeping the "being tested" code working with GHC
> HEAD is on the maintainers of said projects... and they already have
> motivation to do so if they can get early feedback on breakage og
> regressions on compile times and run times.
How would we normalize the results? Different architectures, components,
configurations, and work load during cabal runs could influence the
performance measurements, no?
cheers,
moritz
More information about the ghc-devs
mailing list