Konrad Hinsen
Thu, 28 Aug 2003 16:17:55 +0200

On Thursday 28 August 2003 12:57, Malcolm Wallace wrote:

> The Hat solution is to trace everything, and then use a specialised
> query over the trace to narrow it to just the points you are interested

That sounds risky with programs that treat megabytes of data. It isn't always
possible to test with small data sets, e.g. when different algorithms are
used depending on the size of the problem.

> in.  At the moment, the hat-observe browser is the nearest to what you
> want - like HOOD, it permits you to see the arguments and results of
> a named function call, but additionally you can restrict the output

I am mostly interested in intermediate values.
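To make that concrete, here is a minimal sketch (my own, not Hat's or HOOD's actual API) of HOOD-style observation of intermediate values, built on Debug.Trace from base:

```haskell
import Debug.Trace (trace)

-- A made-up, minimal stand-in for HOOD's richer Observable machinery:
-- 'observe' labels an intermediate value, prints it (to stderr) as it
-- is demanded, and returns it unchanged, so evaluation order and the
-- program's results are unaffected.
observe :: Show a => String -> a -> a
observe label x = trace (label ++ ": " ++ show x) x

main :: IO ()
main = print (sum (map (observe "doubled" . (* 2)) [1, 2, 3 :: Int]))
```

Because the observation point is just a function applied inline, it can be dropped onto any intermediate expression without restructuring the program.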

> Another idea is to permit real Haskell expressions to post-process the
> result of the trace query, rather like meta-programming.  So your second ...

That sounds like a very flexible approach. Could one perhaps do this *while*
the trace is being constructed, in a lazy evaluation fashion, such that
unwanted trace data is never generated and stored?
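A hypothetical sketch of what I mean, assuming the trace were exposed as a lazy list of events (the Event type and names here are made up, not Hat's actual trace representation):

```haskell
-- If the tracer produced events lazily, an ordinary Haskell predicate
-- could act as the query while the trace is being constructed;
-- laziness means events the query never demands are never materialised.
data Event = Call String Int | Result String Int
  deriving (Show, Eq)

-- An unbounded stream of trace events, generated only on demand.
events :: [Event]
events = concatMap (\n -> [Call "f" n, Result "f" (n * n)]) [1 ..]

-- A query in plain Haskell: only the first three interesting results
-- are ever forced, so the rest of the trace is never built or stored.
query :: [Event] -> [Event]
query = take 3 . filter interesting
  where
    interesting (Result _ v) = v > 4
    interesting _            = False

main :: IO ()
main = mapM_ print (query events)
```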

> QuickCheck and Hat can be made to work together nicely.  There is a
> version of QuickCheck in development which works by first running
> the ordinary program with lots of random test data.  If a failure
> is found, it prunes the test case to the minimal failing case, and
> passes that minimal case to a Hat-enabled version of the program,
> which can then be used to investigate the cause of the failure.

That sounds very useful.
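For readers unfamiliar with the shrinking step, a hedged toy version of that workflow (the names prop, shrinkOne and minimise are illustrative only; the real QuickCheck/Hat pipeline is considerably richer):

```haskell
-- A deliberately false property, standing in for a real test.
prop :: [Int] -> Bool
prop xs = sum xs < 10

-- All lists obtained by dropping exactly one element: a hand-rolled
-- shrinker in the spirit of QuickCheck's.
shrinkOne :: [Int] -> [[Int]]
shrinkOne xs = [ take k xs ++ drop (k + 1) xs | k <- [0 .. length xs - 1] ]

-- Repeatedly step to any smaller case that still fails the property;
-- the minimal failing case found would then be fed to a Hat-enabled
-- build of the program to investigate the cause of the failure.
minimise :: ([Int] -> Bool) -> [Int] -> [Int]
minimise p xs =
  case [ ys | ys <- shrinkOne xs, not (p ys) ] of
    (ys : _) -> minimise p ys
    []       -> xs

main :: IO ()
main = print (minimise prop [1 .. 7])
```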

Konrad Hinsen                            | E-Mail:
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-
Rue Charles Sadron                       | Fax:  +33-
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais