[Haskell-cafe] Strict type system allows for a maximum number of programming errors to be caught at compile time.

Richard O'Keefe ok at cs.otago.ac.nz
Mon May 3 18:47:16 EDT 2010


On May 4, 2010, at 5:07 AM, Kyle Murphy wrote:
>
> To look at the flip side of your statistics:
> Compiler says my code is right => My code is actually wrong -- 40%
> Compiler says my code is wrong => My code is actually right -- 5%
>
> I'd argue this is provably wrong, as correct code by definition  
> would compile.

Only for some definitions of "correct".
It was clearly explained in the original paper introducing
Hindley-Milner type checking that some *correct* programs
(in the sense that they would not "go wrong" at run time
and would deliver the intended results) would be rejected by
such a type checker.  To give a Haskell example,

	main = print $ (head [1,'x'] + 1)

cannot possibly go wrong, yet Haskell will reject it.
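
To give a feel for the cost, here is one way (a sketch of my own,
not from the original post) to get the same behaviour past the
type checker: tag the elements with an explicit sum type and
write out the branch that "cannot happen" by hand:

	data IntOrChar = I Int | C Char

	main :: IO ()
	main = print (case head [I 1, C 'x'] of
	                I n -> n + 1
	                C _ -> error "unreachable")

The program is now accepted, at the price of extra code and a
run-time error branch supplied by the programmer.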

But that's not the argument for dynamic types.
The argument for dynamic types is adaptability.
This is why Brad Cox designed Objective C the way he did,
KNOWING that there was a price in efficiency and a price
in static checking:  to write programs that could be extended
more easily.
This is why Joe Armstrong designed Erlang the way he did,
KNOWING that there was a price in efficiency and a price
in static checking:  so that systems could dynamically load
new modules and even replace existing ones, so that systems
could be upgraded without being shut down.
(The Smalltalk term for this is "changing the engine while
driving down the highway at 60mph", but sometimes shutdowns
really are a very bad thing.)

It's interesting that after Brad Cox's original design,
Objective C was extended in a way that permitted more
type checking.  It's interesting that Erlang, after several
experiments, has finally got a type system that seems to
have stuck (possibly because it serves three purposes:
documentation in ErlDoc, performance in the native code
compiler HiPE, and static checking in the Dialyzer), but
it remains optional.

For about a year I kept a log of errors in the Smalltalk code
I was writing, and type errors were in fact quite rare.
Errors due to partial functions being applied to arguments
outside their domain were more common.
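
A small illustration (mine, not from that log): the following
typechecks, but head is partial and fails at run time on the
empty list -- exactly the class of error a static type checker
does not catch.

	main :: IO ()
	main = print (head ([] :: [Int]))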

When people have this argument, they are usually thinking in
terms of type systems like (say) Java's.  A type system that
only stops you doing bad things is useful, but it's hard to
love.  The great thing about the Haskell type system is that
it does far more than that.  It's at least plausible that
the Haskell type system reduces errors by reducing the
amount of code you have to write.

To give just one example:  the original QuickCheck for Haskell
was admittedly a brilliant and innovative idea, but the actual
*code* needed to make it happen wasn't that bulky, or even that
hard to understand given the core idea: that random data
generation could be driven by the types.  In contrast, the
QuickCheck that has been written for Erlang is a commercial
product in which we are advised in strong terms that the
"generators" for random data are NOT types, and writing new
ones is, while not *too* complicated, something the programmer
always has to do instead of sometimes letting it be automatic.
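
For concreteness, a minimal sketch (assuming the standard
Test.QuickCheck library; not taken from either QuickCheck): the
random generator for the argument is chosen automatically from
the property's type, [Int] -> Bool, via the Arbitrary class.

	import Test.QuickCheck

	prop_reverseTwice :: [Int] -> Bool
	prop_reverseTwice xs = reverse (reverse xs) == xs

	main :: IO ()
	main = quickCheck prop_reverseTwice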

Various kinds of generic programming for Haskell take this idea
of letting the types drive automatic programming still further.

ML types => theorems for free;
Haskell types => code for free!
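
As a tiny example of "code for free" (my sketch, using stock and
Generic deriving, not from the original post): the type
declaration alone is enough for the compiler to write the
equality, ordering, printing, parsing and enumeration code, and
the Generic instance is what generic-programming libraries
build on.

	{-# LANGUAGE DeriveGeneric #-}
	import GHC.Generics (Generic)

	data Colour = Red | Green | Blue
	  deriving (Eq, Ord, Show, Read, Enum, Bounded, Generic)

	main :: IO ()
	main = do
	  print [minBound .. maxBound :: Colour]  -- [Red,Green,Blue]
	  print (read "Green" :: Colour)          -- Green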


