[Haskell-cafe] Type System vs Test Driven Development
Evan Laforge
qdunkan at gmail.com
Thu Jan 6 03:26:52 CET 2011
On Wed, Jan 5, 2011 at 1:27 PM, Gregory Collins <greg at gregorycollins.net> wrote:
> On Wed, Jan 5, 2011 at 9:02 PM, Jonathan Geddes
> <geddes.jonathan at gmail.com> wrote:
>
>> Despite all this, I suspect that since Haskell is at a higher level of
>> abstraction than other languages, the tests in Haskell must be at a
>> correspondingly higher level than the tests in other languages. I can
>> see that such tests would give great benefits to the development
>> process. I am convinced that I should try to write such tests. But I
>> still think that Haskell makes a huge class of tests unnecessary.
I write plenty of tests. Where static typing helps is that, of
course, I don't write tests for type errors, and more things are type
errors in Haskell than in other languages (incomplete cases, for
instance). But I still write plenty of tests to verify high-level
relations: with this input, I expect this kind of output.
A cheap analogue to "test driven" that I often use is "type driven":
I write down the types and functions, with the hard bits filled in
with 'undefined'. Then I :reload the module until it typechecks.
Then I write tests against the hard bits and run them in ghci until
they pass.
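For concreteness, here is a minimal sketch of that workflow (the
module and all the names in it are made up for the example):

    -- Stub the hard parts with 'undefined' so the module typechecks
    -- before any real implementation exists.
    module Render where

    newtype Signal = Signal [Double]  -- hypothetical type

    -- the easy plumbing is written out in full...
    render :: [Signal] -> Signal
    render = mix . map normalize

    -- ...while the hard bits stay stubbed until the types settle:
    normalize :: Signal -> Signal
    normalize = undefined

    mix :: [Signal] -> Signal
    mix = undefined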
However:
> QuickCheck especially is great because it automates this tedious work:
> it fuzzes out the input for you and you get to think in terms of
> higher-level invariants when testing your code. Since about six months
> ago with the introduction of JUnit XML support in test-framework, we
> also have plug-in instrumentation support with continuous integration
> tools like Hudson:
Incidentally, I've never been able to figure out how to use
QuickCheck. Maybe it has more to do with my particular app, but
QuickCheck seems to expect simple input data and simple properties
relating the input and output, and in my experience that's almost
never true. For instance, I want to ascertain that a function
returns True for "compatible" signals and False for "incompatible"
ones, where the definition of compatible is quirky and complex. I
can make QuickCheck generate lots of random signals, but checking
that "compatible" gives the right answer means reimplementing the
"compatible" function itself.
Or I just pick a few example inputs and expected outputs. To get
abstract enough that I'm not simply reimplementing the function under
test, I have to move to a higher level, and say that notes that have
incompatible signals should be distributed among synthesizers so they
don't make each other sound funny. But now it's too high level: I
need a definition of "sound funny" and a model of a synthesizer...
way too much work, and it's fuzzy anyway. And at this level the
input data is complex enough that I'd have to spend a lot of time
writing, tweaking (and testing!) the data generator just to make
sure it covers the part of the state space I actually want to
exercise.
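To illustrate with made-up code (a toy Signal type, and a
deliberately trivial stand-in for the real quirky definition), the
property degenerates into a second copy of the function under test:

    import Test.QuickCheck

    newtype Signal = Signal [Double] deriving (Show)

    instance Arbitrary Signal where
        arbitrary = fmap Signal arbitrary

    -- the function under test; the real definition is quirky and
    -- complex, this trivial one just stands in for it
    compatible :: Signal -> Signal -> Bool
    compatible (Signal xs) (Signal ys) = length xs == length ys

    -- to state the property, I have to restate the definition:
    compatibleSpec :: Signal -> Signal -> Bool
    compatibleSpec (Signal xs) (Signal ys) = length xs == length ys

    prop_compatible :: Signal -> Signal -> Bool
    prop_compatible s1 s2 = compatible s1 s2 == compatibleSpec s1 s2

    -- ghci> quickCheck prop_compatible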
I keep trying to think of ways to use QuickCheck, and keep failing.
In my experience, the main work of testing devolves to a library of
functions to create the input data, which is occasionally very
complex, and a library of functions to extract the interesting bits
from the output data, which is often also very complex. Then it's
just a matter of 'equal (extract (function (generate input data)))
"abstract representation of output data"'. This is how I do testing
in python too, so I don't think it's particularly haskell-specific.
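Spelled out as Haskell, the shape of such a test looks something
like this (every name here is hypothetical):

    import Control.Monad (unless)

    -- a tiny assertion; a real one would report more context
    equal :: (Eq a, Show a) => a -> a -> IO ()
    equal got expected = unless (got == expected) $
        putStrLn ("FAIL: " ++ show got ++ " /= " ++ show expected)

    -- generate: build complex input data from a compact description
    mkNotes :: [(String, Double)] -> [(String, Double)]
    mkNotes = id  -- a real version would fill in the boring fields

    -- extract: pull just the interesting bits out of the output
    extractPitches :: [(String, Double)] -> [Double]
    extractPitches = map snd

    -- a made-up function under test
    transposeNotes :: Double -> [(String, Double)] -> [(String, Double)]
    transposeNotes n = map (\(name, pitch) -> (name, pitch + n))

    test_transpose :: IO ()
    test_transpose = equal
        (extractPitches (transposeNotes 2 (mkNotes [("a", 1), ("b", 2)])))
        [3, 4]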
I initially tried to use the test-framework stuff and HUnit, but for
some reason it was really complicated and confusing to me, so I gave
up and wrote my own runner that just runs all functions starting
with 'test_' (a rough sketch of the idea is below). It means I don't
get to use the fancy tools, but I'm not sure I need them. A standard
profile output to feed into a tool that draws nice graphs of
performance after each commit would be nice, though; surely there is
such a thing out there?
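For what it's worth, the runner itself is nothing fancy. Stripped to
its essence it just runs a list of named IO actions; the actual
discovery of the 'test_' functions is elided here, so the list is
written out by hand with a placeholder test:

    import Control.Exception (SomeException, try)

    test_dummy :: IO ()
    test_dummy = return ()  -- placeholder for a real test_* function

    -- in the real runner this list is collected automatically from
    -- every function whose name starts with 'test_'
    tests :: [(String, IO ())]
    tests = [("test_dummy", test_dummy)]

    runTests :: IO ()
    runTests = mapM_ run tests
      where
        run (name, test) = do
            putStrLn ("=== " ++ name)
            result <- try test :: IO (Either SomeException ())
            case result of
                Left exc -> putStrLn ("EXCEPTION: " ++ show exc)
                Right () -> return ()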