higher-kind deriving ... or not

C T McBride C.T.McBride@durham.ac.uk
Wed, 27 Feb 2002 12:18:17 +0000 (GMT)

Hi again

On Wed, 27 Feb 2002, Tom Pledger wrote:

> Yes, there's a vicious circle in context reduction, between Wonky Copy
> and Copy (Wonky Copy).
>  | I don't want to start an argument about how to solve this problem.  I
>  | do want to suggest that, for the time being, it would be better to
>  | reject `deriving Show' for type constructors like Wonky (ie those with
>  | higher-kind parameters) than to generate instances which break the
>  | compiler.
>  | 
>  | Or am I just being a spoilsport?
> It depends on your definition of sport.  ;-)
> > data Sport f
> >   = Sport
> >   | Association (f Bool)
> >   deriving Show
> > test = show (Sport :: Sport Maybe)

Fair point, but this is just a thinly disguised first-order type

  type Sport f = Maybe (f Bool)

Correspondingly, it's fine just to check Show for the actual instance
which is used.
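To make that concrete (a sketch, using GHC's StandaloneDeriving, FlexibleContexts and UndecidableInstances extensions rather than plain Haskell 98): the instance can carry the first-order constraint Show (f Bool), which is then discharged separately at each concrete use site, never for f in general.

```haskell
{-# LANGUAGE StandaloneDeriving, FlexibleContexts, UndecidableInstances #-}

data Sport f
  = Sport
  | Association (f Bool)

-- The context mentions only the first-order type f Bool, so it is
-- checked at each use site, not when the instance is declared.
deriving instance Show (f Bool) => Show (Sport f)

test :: String
test = show (Association (Just True) :: Sport Maybe)
-- "Association (Just True)"
```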

In effect, some higher-kind parameters to datatypes are unnecessary
because all their usages can be abstracted as type parameters, with the
corresponding applications of f passed in as arguments, just as above.

However, once you introduce a fixpoint, this abstraction is no longer
possible:

> data Fix f = Fix (f (Fix f))

There's no equivalent first-order definition. This is where
higher-kind parameters actually buy us extra stuff, and it's also the
point at which the first-order constraint for Show becomes hopeless.
Perhaps banning such derivings for all higher-kind parametric
datatypes is being a bit of a spoilsport: we can allow it exactly
where it isn't necessary!
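To see the hopelessness, a derived instance for Fix would need the context
Show (f (Fix f)) => Show (Fix f); at f = Maybe, reducing Show (Maybe (Fix
Maybe)) demands Show (Fix Maybe) again, and round we go. One value-level
escape (a sketch; showFix and shMaybe are invented names, using RankNTypes)
is to pass the show function for f as an ordinary argument instead of
demanding it as a constraint:

```haskell
{-# LANGUAGE RankNTypes #-}

data Fix f = Fix (f (Fix f))

-- Take the "Show a => Show (f a)" property as a rank-2 function argument,
-- since the constraint language cannot express it directly.
showFix :: (forall a. (a -> String) -> f a -> String) -> Fix f -> String
showFix sh (Fix x) = "Fix (" ++ sh (showFix sh) x ++ ")"

shMaybe :: (a -> String) -> Maybe a -> String
shMaybe _ Nothing  = "Nothing"
shMaybe s (Just a) = "Just (" ++ s a ++ ")"

example :: String
example = showFix shMaybe (Fix (Just (Fix Nothing)))
-- "Fix (Just (Fix (Nothing)))"
```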

Another interesting aspect of Tom's example is that the Show instance
exists in practice exactly because
  (i)  Show Bool
  (ii) Show a => Show (f a)  -- when f is Maybe

These are the properties expressed by the relevant instance declarations. 
They are strictly stronger than Show (f Bool), but it takes a pretty
bizarre f to make the distinction. Unfortunately, although we can express
(ii) as a property, we can't demand it as a property, because constraints
are first-order. If we could, the problem with fixpoints would go away,
but instance inference would get even more complex than it already is in
post-98 Haskell.
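For what it's worth, a sketch of what demanding (ii) as a property would
look like (this relies on GHC's later QuantifiedConstraints extension
together with StandaloneDeriving and UndecidableInstances, so it is well
beyond the first-order constraints under discussion here): the quantified
context is exactly property (ii), and with it the fixpoint instance ties
the knot.

```haskell
{-# LANGUAGE QuantifiedConstraints, StandaloneDeriving,
             FlexibleContexts, UndecidableInstances #-}

data Fix f = Fix (f (Fix f))

-- "forall a. Show a => Show (f a)" is property (ii), demanded as a
-- constraint; the instance for Fix f can then call itself recursively.
deriving instance (forall a. Show a => Show (f a)) => Show (Fix f)

example :: String
example = show (Fix (Just (Fix Nothing)) :: Fix Maybe)
-- "Fix (Just (Fix Nothing))"
```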

There's a real question here: at what point does a problem become too
complex for us to accept automation as the only means of its solution? 

It's clear that with typing problems, inference becomes unsustainable
pretty soon after you leave the safe harbours of the Hindley-Milner
system. However, lots of lovely programs have more interesting types:
it would be very frustrating if Haskell forbade these programs just
because their types were not inferrable---not least since, for these
jobs, we usually do think of the type first and the code
afterwards. Sensibly, Haskell allows us to write these types down, so
the machine's task is merely checking. This hybrid approach preserves
type inference for `old' code, whilst promoting more adventurous
programming by allowing us to say what we mean when the machine is too
dim to guess.
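A standard illustration of saying what we mean (a sketch; applyBoth is an
invented name, using GHC's RankNTypes extension): a rank-2 type that no
inference engine will guess, but which the checker happily verifies once
we write it down.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Without this signature, Hindley-Milner inference cannot type f's use
-- at both Int and Bool; given the signature, checking is easy.
applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyBoth f (i, b) = (f i, f b)

example :: (Int, Bool)
example = applyBoth id (3, True)
-- (3,True)
```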

Could there be a corresponding hybrid approach for instance inference?