<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>Great! I'm glad to hear folks are interested. </div><div><br></div><div>It sounds like there is need for a better low-dependencies benchmark suite. I was just grepping through nofib looking for things that are <i>missing</i> and I realized there are no uses of atomicModifyIORef, for example.</div><div><br></div><div>What we're working on at Indiana right this second is not quite this effort, but is the separate, complementary, effort to gather as much data as possible from a large swath of packages (high dependency-count) .</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Note that fibon already has bitrotted, and does not quite work any<br>

> Note that fibon already has bitrotted, and does not quite work any
> more. So there is some low hanging fruit in resurrecting that one.

Agreed. Though I see that nofib already contains some of the fibon benchmarks.

Even though using stack with GHC HEAD loses many of stack's benefits, I think that stack and cabal freeze should make it easier to keep things running for the long term than it was with fibon (which bitrotted quickly).

> Another important step in that direction would be to define a common
> output for benchmark suites defined in .cabal files, so it is easier to
> set up things like http://perf.haskell.org/ghc and
> http://perf.haskell.org/binary for these projects.

Yes, exitcode-stdio-1.0 is useful for testing but not so much for benchmarking. To attempt to harvest Stackage benchmarks, we were going to just assume everything is criterion and catch errors as we go. Should we go further and aim to standardize a new value for "type:" within benchmark suites?
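
For concreteness, what we would be assuming is the usual shape of a criterion suite declared with type: exitcode-stdio-1.0, roughly like the sketch below (the benchmarked function and the names are made up). One nice property of that assumption is that the harness can pass criterion's --csv flag to get machine-readable results:

    -- Sketch of a typical criterion benchmark executable, i.e. what the
    -- Stackage-harvesting pass would assume it is looking at. The fib
    -- function and the group/bench names are illustrative only.
    module Main (main) where

    import Criterion.Main

    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = defaultMain
      [ bgroup "fib"
          [ bench "20" (whnf fib 20)
          , bench "25" (whnf fib 25)
          ]
      ]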

> About the harness: haskell.org is currently paying a student (CCed) to
> set up a Travis-like infrastructure based on gipeda (the software behind
> perf.haskell.org) that would allow library authors to very simply get
> continuous benchmark measurements. Let's see what comes out of that!

What's the infrastructure that currently gathers the data for perf.haskell.org? Is there a repo you can point to? (Since gipeda itself is just the presentation layer, something else must be running things and gathering the data.)

Cheers,
 -Ryan