deriving over renamed types
C T McBride
Tue, 9 Apr 2002 12:53:16 +0100 (BST)
> I was referring to the expression language.
> So this is already allowed:
> f :: (forall a . a -> a) -> (b, c) -> (c, b)
> f i (x,y) = (i y, i x)
> I'd like to be able to have explicit type applications.
> If we denote type application with infix # I'd like to write
> f i (x, y) = (i#c y, i#b x)
> And where you invoke f you could use a type lambda
> ... f (/\a -> \ x -> (x::a)) ...
> It doesn't make much sense in this example, but there are others
> where the implicit stuff just doesn't allow you to do what you want.
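For what it's worth, this is exactly the feature GHC later adopted as the
TypeApplications extension, where the proposal's i#c is written i @c. A
sketch of the quoted example in that later notation (nothing here beyond
standard GHC extensions):

```haskell
{-# LANGUAGE RankNTypes, ScopedTypeVariables, TypeApplications #-}

-- The polymorphic argument i is instantiated explicitly at c and b,
-- rather than leaving the instantiation to unification.
f :: forall b c. (forall a. a -> a) -> (b, c) -> (c, b)
f i (x, y) = (i @c y, i @b x)

main :: IO ()
main = print (f id (1 :: Int, "hi"))
```

As in the proposal, the explicit applications are redundant here; they
matter once unification alone cannot determine the instance.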
There's a plentiful supply of good examples arising naturally from
higher-kind polymorphism. * -> * is full of functions with good
properties (eg being functorial or monadic) which aren't datatype
constructors. Haskell's inability to express them and to instantiate type
schemes with them is a serious hindrance to the good functional
programming practice of identifying and abstracting common structure.
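One standard illustration: the composite of two functors is again
functorial, but `f . g` is not a type constructor Haskell can name, so it
cannot instantiate a type scheme directly; it has to be reified as a
fresh datatype. A minimal sketch:

```haskell
-- "f after g" is a perfectly good function at kind * -> *, but it only
-- becomes a legal type-scheme argument once wrapped in a newtype.
newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose x) = Compose (fmap (fmap h) x)

main :: IO ()
main = print (getCompose (fmap (+ 1) (Compose [Just 1, Nothing])))
```

With type-level lambda one could write `Functor (\a -> f (g a))`
directly and skip the wrapping and unwrapping.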
Similarly, it's not too hard to find examples where the potential
of local `forall' is thwarted by the lack of the corresponding lambda.
These examples are already in circulation. Check out...
Richard Bird and Ross Paterson:
de Bruijn notation as a nested datatype
Generalized folds for nested datatypes
and there are plenty more. These papers are too long, because they
contain the same program written several times with different types.
The abstraction which combines them just isn't available.
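To fix ideas, the central datatype of the first paper is a nested one:
the type of free variables grows under each binder, and the functorial
action is renaming. A sketch of that representation:

```haskell
-- Bird and Paterson's de Bruijn terms as a nested datatype: under a
-- Lam, the variable type a is extended with one fresh name, Nothing.
data Term a = Var a
            | App (Term a) (Term a)
            | Lam (Term (Maybe a))
            deriving Show

-- Renaming is the functorial action; note the non-regular recursion
-- in the Lam case, which forces fmap at type Term (Maybe a).
instance Functor Term where
  fmap f (Var a)   = Var (f a)
  fmap f (App s t) = App (fmap f s) (fmap f t)
  fmap f (Lam t)   = Lam (fmap (fmap f) t)

main :: IO ()
main = print (fmap (+ 1) (Lam (App (Var Nothing) (Var (Just 0)))))
```

The folds over such types are where the papers end up writing the same
program several times at different instances.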
Worse, multi-parameter classes contribute a second source of `functions
from * to *' which just don't fit in with higher-kind polymorphism.
Some rationalization would help. Type theory has already addressed lots
of these issues; I'm not making these proposals in a vacuum. Type-level
abstraction and application, with defaults computed by unification but
explicit overriding available, is standard practice in several
type-theoretic systems.
> > data Zero;
> > data Succ n;
> > type Add Zero b = b;
> > type Add (Succ a) b = Succ (Add a b);
...you can write this program with type classes, but much less clearly
and directly. They're a good tool, but the wrong tool for this job, and
this job is worth doing well.
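For comparison, here is the type-class rendering alluded to: addition
becomes a three-place relation with a functional dependency rather than
a function, plus a term-level stand-in so the checker can be observed
computing (the Nat class for reading the answer back is my addition,
for demonstration only):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, ScopedTypeVariables #-}

data Zero
data Succ n

-- Addition as a relation: Add a b c holds when a + b = c, with the
-- fundep letting the checker compute c from a and b.
class Add a b c | a b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

-- A term-level stand-in whose result type carries the sum.
add :: Add a b c => a -> b -> c
add _ _ = undefined

-- Read a type-level numeral back as an Int, to observe the result.
class Nat n where nat :: n -> Int
instance Nat Zero where nat _ = 0
instance Nat n => Nat (Succ n) where nat _ = 1 + nat (undefined :: n)

main :: IO ()
main = print (nat (add (undefined :: Succ Zero)
                       (undefined :: Succ (Succ Zero))))
```

The logic-programming indirection is exactly the lack of clarity
complained of above: the function has to be encoded as a relation.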
In that respect, I'd caution against pattern matching over * and
higher kinds, because those kinds are not closed. That's why I argue
in favour of datakinds (given that using genuine term-level data would
involve a seismic shift in Haskell's static/dynamic divide).
But, imagining that Zero and Succ are the constructors of a datakind
Nat over which we can write these programs safely...
> > data Foo f = MkFoo (f ());
> > type Succ' = Succ;
> > type Succ'' n = Succ n;
> > -- which of these types are the same?
> > f1 = MkFoo undefined :: Foo Succ;
> > f2 = MkFoo undefined :: Foo Succ';
> > f3 = MkFoo undefined :: Foo Succ'';
> > f4 = MkFoo undefined :: Foo (Add (Succ Zero));
...as has already been pointed out, they all have the same eta-long normal
form, being Foo (/\n -> Succ n).
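In hindsight, the datakind imagined above is essentially what GHC later
provided as DataKinds, with closed type families supplying the safe
pattern matching. A sketch, assuming a modern GHC:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}
import Data.Proxy

-- The declaration is promoted: Nat is also a kind, with 'Zero and
-- 'Succ as its only inhabitants, so matching over it is exhaustive.
data Nat = Zero | Succ Nat

type family Add (a :: Nat) (b :: Nat) :: Nat where
  Add 'Zero     b = b
  Add ('Succ a) b = 'Succ (Add a b)

-- This compiles only because the checker normalizes both arguments
-- to the same Nat, i.e. 1 + 1 reduces to 2 at the type level.
same :: Proxy a -> Proxy a -> Bool
same _ _ = True

main :: IO ()
main = print (same (Proxy :: Proxy (Add ('Succ 'Zero) ('Succ 'Zero)))
                   (Proxy :: Proxy ('Succ ('Succ 'Zero))))
```

Ill-kinded applications like Add Bool 'Zero are rejected outright,
which is the point of working over a closed kind.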
Machines are good at figuring this stuff out, which is why I like their
help when I'm programming. Working with types which contain computations,
it's a great help if you can typecheck your code as you go, and ask for
(head) normal forms anytime.
This stuff doesn't require a leap in the dark. It just takes a bit
of borrowing and adaptation.