[Haskell-cafe] Testing of GHC extensions & optimizations

Emil Axelsson 78emil at gmail.com
Mon Sep 3 07:08:12 UTC 2018


Have a look at Michal Palka's Ph.D. thesis:

https://research.chalmers.se/publication/195849

IIRC, his testing revealed several strictness bugs in GHC when compiling 
with optimization.
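The general approach -- generate programs, then check that an "optimized"
pipeline agrees with a reference semantics -- can be sketched for a toy
expression language. (This is only an illustration of the technique; the
language, the constant-folding pass, and all names below are made up and
have nothing to do with GHC's actual internals.)

```haskell
-- Differential testing of a compiler pass, in miniature.
-- A tiny expression language:
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
  deriving (Show, Eq)

-- Reference semantics:
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- A toy "optimization": bottom-up constant folding.
optimize :: Expr -> Expr
optimize (Add a b) = case (optimize a, optimize b) of
  (Lit x, Lit y) -> Lit (x + y)
  (a', b')       -> Add a' b'
optimize (Mul a b) = case (optimize a, optimize b) of
  (Lit x, Lit y) -> Lit (x * y)
  (a', b')       -> Mul a' b'
optimize e = e

-- Exhaustively enumerate expressions up to a given depth
-- (a random generator, as in Csmith or QuickCheck, would scale better):
exprs :: Int -> [Expr]
exprs 0 = map Lit [0, 1, 2]
exprs d = exprs 0
       ++ [ op a b
          | let sub = exprs (d - 1)
          , op <- [Add, Mul], a <- sub, b <- sub ]

-- The differential property: optimization must preserve semantics.
counterexamples :: Int -> [Expr]
counterexamples d = [ e | e <- exprs d, eval e /= eval (optimize e) ]

main :: IO ()
main = print (counterexamples 2)  -- [] if the pass preserves semantics
```

Tools like Palka's generator or Csmith replace the exhaustive enumeration
with random generation of well-typed programs, which is what makes the
technique scale to a real compiler.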

/ Emil

Den 2018-09-03 kl. 03:40, skrev Rodrigo Stevaux:
> Thanks for the clarification.
>
> What I am hinting at is, the Csmith project caught many bugs in C 
> compilers by using random testing -- feeding random programs and 
> testing if the optimizations preserved program behavior.
>
> Haskell, with its tens of optimizations, could be a good target for 
> the same technique.
>
> I have no familiarity with the GHC or with any compilers in general; I 
> am just looking for something to study.
>
> My question, in its most direct form, is: in your view, could GHC 
> optimizations hide bugs that could potentially be revealed by 
> exploring the program space?
>
> On Sun, Sep 2, 2018 at 16:58, Sven Panne <svenpanne at gmail.com 
> <mailto:svenpanne at gmail.com>> wrote:
>
>     On Sun, Sep 2, 2018 at 20:05, Rodrigo Stevaux
>     <roehst at gmail.com <mailto:roehst at gmail.com>> wrote:
>
>         Hi Omer, thanks for the reply. The tests you run are for
>         regression testing, that is, non-functional aspects, is my
>         understanding right? [...]
>
>
>     Quite the opposite, the usual steps are:
>
>        * A bug is reported.
>        * A regression test is added to GHC's test suite, reproducing
>     the bug
>     (https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Adding).
>        * The bug is fixed.
>
>     This ensures the bug doesn't come back later. Do this for a few
>     decades, and you have a very comprehensive test suite for
>     functional aspects. :-) The reasoning behind this: blindly adding
>     tests is wasted effort most of the time, because you often end up
>     testing things that only very rarely break. Bugs, on the other
>     hand, point you very concretely at the problematic, tricky, or
>     complicated parts of your software.
>
>     Catching increases in runtime/memory consumption is a slightly
>     different story, because you have to come up with "typical"
>     scenarios to make useful comparisons. You can have synthetic
>     scenarios for very specific parts of the compiler, too, like
>     pattern matching with tons of constructors, or using gigantic
>     literals, or type checking deeply nested tricky things, etc., but
>     I am not sure if such things are usually called "regression tests".
>
>     Cheers,
>        S.
>
>
>
>
