[Haskell-cafe] Testing of GHC extensions & optimizations

Joachim Durchholz jo at durchholz.org
Sun Sep 2 20:43:51 UTC 2018


On 02.09.2018 at 21:58, Sven Panne wrote:
> Quite the opposite, the usual steps are:
> 
>     * A bug is reported.
>     * A regression test is added to GHC's test suite, reproducing the 
> bug (https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Adding).
>     * The bug is fixed.
> 
> This way you make sure the bug doesn't come back later.

That's just the non-thinking aspect, though, and more embarrassment 
avoidance than anything else: the first level of automated testing.
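
For concreteness, such a test is typically just a tiny Haskell file 
that reproduces the bug, plus a one-line entry in the test suite's 
all.T file, as the linked wiki page describes. A sketch (the ticket 
number T99999 and the file contents are made up):

    -- T99999.hs: minimal reproducer for the hypothetical ticket #99999.
    -- Registered in all.T with something like
    --   test('T99999', normal, compile_and_run, [''])
    -- so the harness compiles and runs it on every test run and
    -- compares the output against an expected T99999.stdout.
    module Main where

    main :: IO ()
    main = print (sum [1 .. 100 :: Int])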

> Do this
> for a few decades, and you have a very comprehensive test suite for 
> functional aspects. :-) The reasoning behind this: blindly adding tests 
> is wasted effort most of the time, because that way you often test 
> things which only very rarely break; bugs, OTOH, point you very 
> concretely at the problematic/tricky/complicated parts of your SW.

Well, you have to *think*.
You can't just blindly add tests for every bug that was ever reported; 
you get an ever-growing pile of test code, and if the spec changes you 
need to change the tests. So you need a strategy for curating the test 
code, and you very much prefer to test the thing that actually went 
wrong, not the thing that was reported.
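
To make that concrete (all names here are invented): suppose the 
report says "formatting a huge file hangs", but the actual fault sits 
in a line-breaking helper. The curated test pins down the helper's 
invariant instead of replaying the whole report, e.g. as a QuickCheck 
property:

    import Test.QuickCheck

    -- Hypothetical helper that actually contained the bug; the report
    -- was about a hanging formatter, the fault was in the line breaker.
    breakLines :: Int -> String -> [String]
    breakLines width = go
      where
        go [] = []
        go s  = let (line, rest) = splitAt (max 1 width) s
                in line : go rest

    -- The curated test states the real invariant: breakLines always
    -- terminates and never drops characters, even for width <= 0.
    prop_breakLinesTotal :: Int -> String -> Bool
    prop_breakLinesTotal width s = concat (breakLines width s) == s

    main :: IO ()
    main = quickCheck prop_breakLinesTotal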

I'm pretty sure the GHC folks do think about this, actually; I'm just 
speaking up so that people don't take this "just add a test whenever a 
bug occurs" advice at face value. There's much more to it.

> Catching increases in runtime/memory consumption is a slightly different 
> story, because you have to come up with "typical" scenarios to make 
> useful comparisons.

It's just a case where you cannot blindly add a test for every 
performance regression you see; you have to set up testing beforehand. 
That's the exact opposite of what you recommend, so maybe the 
recommendation shouldn't be taken at face value ;-P
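
Setting it up beforehand usually means keeping a fixed set of 
benchmark scenarios and tracking their numbers across commits. A 
minimal sketch using criterion (the scenarios are invented):

    import Criterion.Main

    -- "Typical" scenarios fixed in advance; run them on every commit
    -- so a performance regression shows up as a diff against recorded
    -- numbers, not as a bug report months later.
    main :: IO ()
    main = defaultMain
      [ bgroup "stress"
          [ bench "sum"   $ whnf sum [1 .. 10000 :: Int]
          , bench "words" $ whnf (length . words) sampleText
          ]
      ]
      where
        sampleText = unwords (replicate 2000 "lorem ipsum")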

> You can have synthetic scenarios for very specific
> parts of the compiler, too, like pattern matching with tons of 
> constructors, or using gigantic literals, or type checking deeply nested 
> tricky things, etc., but I am not sure if such things are usually called 
> "regression tests".

It's a matter of definition and common usage, but indeed many people 
associate the term "regression testing" with "let's write a test case 
whenever we see a bug".
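
For what it's worth, a synthetic scenario like the ones Sven mentions 
can be as simple as a generator that emits a stress-test module; a 
sketch (the 500-constructor count is arbitrary):

    import Data.List (intercalate)

    -- Emit a module with 500 constructors and a complete function
    -- over all of them; compile the output with GHC and watch the
    -- compile time and allocation of the pattern-match checker.
    main :: IO ()
    main = putStr . unlines $
         [ "module Stress where"
         , "data T = " ++ intercalate " | " ctors
         , "f :: T -> Int" ]
      ++ [ "f " ++ c ++ " = " ++ show i | (c, i) <- zip ctors [0 :: Int ..] ]
      where
        ctors = [ "C" ++ show i | i <- [1 .. 500 :: Int] ]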

This is one of the reasons why I prefer the term "automated testing": 
it's more general and encompasses all the things that one actually does.

Oh, and sometimes you even add a test blindly due to a bug report. It's 
still a good first line of defense; it's just not what you should always 
do, and never without thinking about an alternative.

Regards,
Jo

