Any remaining test patches?

Duncan Coutts duncan.coutts at
Mon May 23 16:19:16 CEST 2011

On Mon, 2011-05-23 at 15:39 +0200, Johan Tibell wrote:
> On Sat, May 21, 2011 at 4:20 PM, Duncan Coutts
> <duncan.coutts at> wrote:
> > Here's the equivalent bit of my design (the TestResult is the same):
> >
> > data TestInstance
> >   = TestInstance {
> >       run            :: IO TestResult,
> >       name           :: String,
> >
> >       concurrentSafe :: Bool,
> >       expectedFail   :: Bool,
> >
> >       options        :: [OptionDescr],
> >       setOption      :: String -> String -> Either String TestInstance
> >     }
> I cannot think of a straightforward way to implement setOption in the
> above design. One would have to "store" options in the run closure.

data MyTestType = MyTestType (IO Bool) FooOption

myTestType :: MyTestType -> TestInstance
myTestType test = myTestType' defaultFooOption
  where
    myTestType' foo =
      emptyTestInstance {
        run       = fmap convertResult (runMyTest test foo),
        options   = [optionDescr "foo" OptionString],
        setOption = \name val ->
          case name of
            "foo" -> Right (myTestType' (parseAsFooOption val))
            _     -> Left ("unknown option: " ++ name)
      }

The function myTestType' is the TestInstance closure with all the
private parameters/fields exposed.

Hurrah for abstraction via lambdas. BTW, this is not exotic. It's a
standard "OO in FP" abstraction technique that we don't use enough.
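To make the technique concrete, here is a self-contained sketch of the
same closure trick. TestResult, TestInstance and the "limit" option
are simplified stand-ins invented for this example, not the proposed
interface:

```haskell
-- Simplified stand-ins for the proposed types, for illustration only.
data TestResult = Pass | Fail String deriving (Eq, Show)

data TestInstance = TestInstance
  { run       :: IO TestResult
  , name      :: String
  , setOption :: String -> String -> Either String TestInstance
  }

-- The private parameter (here a limit) lives only in the closure;
-- setting the option rebuilds the whole TestInstance with a new value.
countTest :: TestInstance
countTest = withLimit 10
  where
    withLimit :: Int -> TestInstance
    withLimit n = TestInstance
      { run       = return (if n >= 0 then Pass else Fail "negative limit")
      , name      = "count"
      , setOption = \opt val -> case opt of
          "limit" -> Right (withLimit (read val))
          _       -> Left ("unknown option: " ++ opt)
      }
```

Note that this needs no extensions at all: the "object state" is just
the arguments of withLimit, and setOption returns a fresh closure.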

> A type class approach would allow the test framework to use extra fields
> in the record that implements the type class to store the options e.g.

It's more or less the same except that using lambdas/closures means we
do not need an existential type wrapper. It's H98.
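For contrast, here is roughly where the existential wrapper shows up
in a type-class formulation. This is a hypothetical sketch to
illustrate the point, not Johan's actual proposal:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Hypothetical sketch of the type-class alternative; the class and
-- type names are invented for illustration.
class TestLike a where
  runTest :: a -> IO Bool
  setOpt  :: String -> String -> a -> Either String a

-- To put tests of different instance types in one list, the framework
-- must wrap them existentially -- this is the step that goes beyond H98.
data SomeTest = forall a. TestLike a => SomeTest a

runSome :: SomeTest -> IO Bool
runSome (SomeTest t) = runTest t

-- Options are stored as ordinary fields of the instance type.
data MyTest = MyTest { limit :: Int }

instance TestLike MyTest where
  runTest t          = return (limit t >= 0)
  setOpt "limit" v t = Right t { limit = read v }
  setOpt o       _ _ = Left ("unknown option: " ++ o)
```

The two designs store the same state; the difference is whether it
lives in a record field behind an existential or in a closure.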

Note also that only the framework implementers need to provide this
interface so we will not confuse casual users with this OO style.

> I prefer "exclusive" to "concurrentSafe", as there might be tests that
> are concurrency safe but should still be run in isolation. Not a big
> difference in practice though.
> Do we really need expectedFail, it feels quite GHC specific and there
> are other options, like commenting out the test or using a tags
> mechanism (see my reply to your other email).

Those were just suggestions. I'm not totally wedded to them.

So for the first: what property are we asking test authors to declare?
Whether the test can be run concurrently with others or whether it must
be run in isolation. I think we actually mean the same thing here, just
expressing it as a positive (safe to run this test concurrently with
others) or as a negative (must run this test exclusively, not when any
others are running). We just need to pick something that is clear and
document it properly.

> Here are some other attributes we might want to consider:
> * size - How long is this test expected to take? You might want to run
> all small and medium size tests on every commit but reserve large and
> huge tests for before a release.
> * timeout - The maximum amount of time the test is expected to run.
> The test agent should kill the test after this time has passed.
> Timeout could get a default value based on size. Test agents should
> probably apply some sort of timeout even if we don't let users specify
> it on a per test basis.

So in your other email you suggest a simple attribute system where we
use a set of named tags, but with no meanings that a generic test agent
will know about, just to be used as a way for users to filter on tests.

Then here you have a few suggestions for attributes with particular
meanings to the test agent. Perhaps that combination is enough, and we
do not need a general mechanism for declaring attribute meanings. I
think this part is probably worth thinking about a bit more, though.
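The plain-tags idea fits in a few lines: tags mean nothing to the
agent and serve only as filter keys. The Test type and field names
below are invented for this example:

```haskell
-- Illustrative sketch of meaning-free tags used purely for filtering.
data Test = Test { testName :: String, tags :: [String] }

-- Keep the tests that carry at least one of the wanted tags.
selectTests :: [String] -> [Test] -> [Test]
selectTests wanted = filter (\t -> any (`elem` tags t) wanted)
```

An agent would then combine this user-driven filtering with the few
attributes (like a timeout) whose meaning it does understand.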


More information about the cabal-devel mailing list