Attempt at a real world benchmark
mail at joachim-breitner.de
Fri Dec 9 05:00:49 UTC 2016
On Friday, 2016-12-09 at 09:50 +0800, Moritz Angermann wrote:
> let me thank you for perusing this!
> > > I am not sure how useful this is going to be:
> > > + Tests lots of common and important real-world libraries.
> > > − Takes a lot of time to compile, includes CPP macros and C code.
> > > (More details in the README linked above).
> > another problem with the approach of taking modern real-world code:
> > It uses a lot of non-boot libraries that are quite compiler-close and
> > do low-level stuff (e.g. using Template Haskell, or the like). If
> > we add that to nofib, we’d have to maintain its compatibility with GHC
> > as we continue developing GHC, probably using lots of CPP. This was
> > less an issue with the Haskell98 code in nofib.
> > But is there a way to test realistic modern code without running into
> > this problem?
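The kind of CPP guard such code tends to accumulate might look like the
following minimal sketch, keyed on the __GLASGOW_HASKELL__ version macro
(the module and function are made up for illustration):

```haskell
{-# LANGUAGE CPP #-}
-- Minimal sketch of version-conditional code maintained via CPP.
-- __GLASGOW_HASKELL__ expands to the compiler version, e.g. 800 for GHC 8.0.
module Main where

#if __GLASGOW_HASKELL__ >= 800
describe :: String
describe = "built with GHC >= 8.0"
#else
describe :: String
describe = "built with an older GHC"
#endif

main :: IO ()
main = putStrLn describe
```

Every breaking change in GHC tends to grow another such #if branch, which
is exactly the maintenance burden described above.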
> what are the reasons besides fragmentation for a modern real-world test
> suite outside of ghc (maybe even maintained by a different set of people)?
I am not sure what you are saying. Are you proposing to maintain a
benchmark set outside GHC, or did you get the impression that I am
proposing that?
> At some point you would also end up having a matrix of performance
> measurements due to the evolution of the library and the evolution of ghc.
> Fixing the library to profile against ghc will likely end at some point in
> incompatibility with ghc. Fixing ghc will similarly at some point end with
> the inability to compile the library.
My motivation right now is to provide something to measure GHC, so this
would involve fixing the library. And that is what I am worried about:
Too much maintenance effort in keeping this large piece of code
compatible with GHC.
But maybe it is ok if it is part of nofib, and hence of GHC, so that
every breaking change in GHC can immediately be accounted for in the
benchmark code.
A nice side effect of this might be that GHC developers can get a
better idea of how much code their change breaks.
> What measurements did you collect? Are these broken down per module?
Nothing yet, this is on the TODO list.
> Something I’ve recently had some success with was dumping measurements
> into influxdb (or a similar data point collections service) and hook
> that up to grafana for visualization.
Nice! Although these seem to be tailored for data-over-time, not
data-over-commit. This mismatch in the data model was part of the
motivation for me to create gipeda, which powers GHC’s performance
dashboard.
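One way to shoehorn commit-indexed data into a time-series store such as
influxdb is to tag each point with the commit hash, so the commit (rather
than the timestamp) can be used as the primary key when querying. A minimal
sketch using influxdb's line protocol, with made-up measurement and tag
names:

```haskell
-- Sketch: encoding one per-commit benchmark result as an influxdb
-- line-protocol point. Tags: commit hash and benchmark name; field:
-- the measured value (the trailing "i" marks an integer field).
-- All names here are hypothetical, not from the thread.
toLineProtocol :: String -> String -> String -> Integer -> Integer -> String
toLineProtocol measurement commit benchmark value timeNs =
  measurement ++ ",commit=" ++ commit ++ ",bench=" ++ benchmark
              ++ " value=" ++ show value ++ "i " ++ show timeNs

main :: IO ()
main = putStrLn
  (toLineProtocol "ghc_nofib" "deadbeef" "spectral" 123456 1481259649000000000)
```

Querying by the commit tag then sidesteps the data-over-time assumption,
at the cost of carrying an essentially meaningless timestamp on each point.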
Joachim “nomeata” Breitner
mail at joachim-breitner.de • https://www.joachim-breitner.de/
XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
Debian Developer: nomeata at debian.org