[Haskell-cafe] Testing of GHC extensions & optimizations
Sven Panne
svenpanne at gmail.com
Mon Sep 3 06:29:54 UTC 2018
On Sun, Sep 2, 2018 at 22:44, Joachim Durchholz <jo at durchholz.org> wrote:
> That's just the... non-thinking aspect, and more embarrassment
> avoidance. The first level of automated testing.
>
Well, even avoiding embarrassing bugs is extremely valuable. The vast
majority of bugs in real-world SW *are* actually highly embarrassing, and
even worse: similar bugs have probably been introduced before. Getting some
tricky algorithm wrong is the exception, for at least two reasons: the
majority of code is typically very mundane and boring, and people are
usually more awake and focused when they know that they are writing
non-trivial stuff. Of course your mileage varies, depending on the domain,
the experience of the programmers, deadline pressure, etc.
> > Do this
> > for a few decades, and you have a very comprehensive test suite for
> > functional aspects. :-) The reasoning behind this: Blindly adding tests
> is wasted effort most of the time, because this way you often test things
> that only very rarely break; bugs, OTOH, point you very concretely at
> problematic/tricky/complicated parts of your SW.
>
> Well, you have to *think*.
> You can't just blindly add tests for every bug that was ever reported;
> you get an ever-growing pile of test code, and if the spec changes you
> need to change the tests. So you need a strategy to curate the test
> code, and you very much prefer to test for the thing that actually went
> wrong, not the thing that was reported.
>
Two things here: I never proposed adding the exact code from the bug report
to a test suite. Bug reports are usually too big and too unspecific, so of
course you add a minimal, focused test triggering the buggy behavior.
Furthermore: if the spec changes, your tests *must* break, by all means,
otherwise: what are the tests actually testing if it's not the spec? Of
course only those tests should break which cover the changed part of the
spec.
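For illustration, a minimal, focused test of that kind could look roughly
like the following sketch (using tasty/tasty-hunit; the splitOn function
and the bug it pins down are purely hypothetical):

  import Test.Tasty       (defaultMain, testGroup)
  import Test.Tasty.HUnit (testCase, (@?=))

  -- Hypothetical function under test: imagine a bug report that boiled
  -- down to the trailing empty field being dropped.
  splitOn :: Char -> String -> [String]
  splitOn c = foldr step [""]
    where
      step x acc@(cur : rest)
        | x == c    = "" : acc
        | otherwise = (x : cur) : rest
      step _ []     = error "unreachable: the accumulator is never empty"

  main :: IO ()
  main = defaultMain $ testGroup "regressions"
    [ testCase "empty input yields a single empty field" $
        splitOn ',' "" @?= [""]
    , testCase "trailing separator keeps the empty field" $
        splitOn ',' "a," @?= ["a", ""]
    ]

The test names exactly the behavior that once went wrong and nothing more,
so a later failure points straight at the violated part of the contract.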
> It's just a case where you cannot blindly add a test for every
> performance regression you see, you have to set up testing beforehand.
> Which is the exact opposite of what you recommend, so maybe the
> recommendation shouldn't be taken at face value ;-P
>
This is exactly why I said that these tests are a different story. For
performance measurements there is no binary pass/fail outcome,
because typically many tradeoffs are involved (space vs. time etc.).
Therefore you have to define what you consider important, measure that, and
guard it against regressions.
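As a sketch (using criterion; the render function and the input sizes are
placeholders), "define what you consider important and measure it" could
look like this:

  import Criterion.Main (bench, bgroup, defaultMain, nf)

  -- Placeholder for the code whose performance actually matters.
  render :: Int -> String
  render n = unwords (map show [1 .. n])

  main :: IO ()
  main = defaultMain
    [ bgroup "render"
        [ bench "10k items"  $ nf render 10000
        , bench "100k items" $ nf render 100000
        ]
    ]

Guarding it then means comparing the reported numbers (e.g. criterion's
--csv output) against a stored baseline in CI and failing once the
deviation exceeds a threshold you chose; tools like tasty-bench offer much
the same interface with built-in baseline comparison.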
> It's a matter of definition and common usage, but indeed many people
> associate the term "regression testing" with "let's write a test case
> whenever we see a bug". [...]
>
This sounds far too disparaging, and quite a few companies have a rule
like "no bug fix gets committed without an accompanying regression test"
for a good reason. People usually have no real clue where their most
problematic code is (just like they have no clue where the most
performance-critical part is), so having *some* hint (a bug report) is far
better than guessing without any hint.
Cheers,
S.