TDNR without new operators or syntax changes

Adam Gundry adam at
Tue May 24 08:07:58 UTC 2016

Thanks for starting this discussion! Having spent more time thinking
about record field overloading than perhaps I should, here are some
things to think about...

On 22/05/16 16:01, Jeremy wrote:
> Bertram Felgenhauer-2 wrote
>>> 1. If the compiler encounters a term f a, and there is more than one
>>> definition for f in scope (after following all of the usual rules for
>>> qualified imports);
>>> 2. And exactly one of these definitions matches the type of a (or the
>>> expected type of f if given);
>>> 3. Then this is the definition to use.
>> I now find that Anthony's concerns are justified. The question is, what
>> exactly does the type matching in step 2 involve? If it is a recursive
>> call to the type-checker then you'll have a huge performance problem.
> I was concerned about this, but not knowing anything about how
> type-checking/inference is implemented, I wouldn't know if this is actually
> a problem or not.

Unfortunately this is indeed a problem. When the type-checker encounters
`f a`, it infers the type of `f`, ensures it is a function type `s ->
t`, then checks that `a` has type `s`. But if there are multiple
possible `f`s in scope, what should it do? Options include:

 1. run the entire type-checking process for each possible type of `f`
(this is almost certainly too slow);

 2. give ad-hoc rules to look at type information from the context or
the inferred type of the argument `a`;

 3. introduce a name resolution constraint and defer it to the
constraint solver (the same way that solving for typeclasses works);

 4. require a type signature in a fixed location.
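
To make the ambiguity concrete before weighing these options, here is a
small example of my own, using the `size` functions that Data.Map and
Data.Set both export (only the first import is active, so this compiles
as written; the comments describe what happens if both are in scope):

    import Data.Map (Map, size)    -- size :: Map k a -> Int
    -- import Data.Set (size)      -- Data.Set also exports a size

    count :: Map Char Int -> Int
    count = size  -- fine: only Data.Map's size is in scope unqualified

    -- Uncommenting the second import makes this same definition an
    -- ambiguous occurrence of `size` today, even though only
    -- Data.Map.size can have the type the signature demands. A
    -- TDNR-style rule would let the type-checker pick that candidate.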

Both 2 and 3 would need careful specification, as they run the risk of
exposing the type-checking algorithm to the user, and changing the
typing rules depending on what is in scope may be problematic. In
particular, if `f` has a higher-rank type, then bidirectional type
inference is likely to break down.

The DuplicateRecordFields approach is to make use of bidirectional type
inference (a bit like 2, but without looking at the type of the
argument) and otherwise require a type signature. This is carefully
crafted to avoid needing to change existing typing rules. The wiki page
[1] sets out some of the cases in which this does and doesn't work.
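
As a small sketch of that behaviour (my own example, written against the
rules the wiki page describes, not copied from it): with two records
sharing a field name, a type pushed into the occurrence of the selector
disambiguates it, but the type of the argument is never consulted:

    {-# LANGUAGE DuplicateRecordFields #-}

    data Person  = Person  { name :: String }
    data Company = Company { name :: String }

    -- Accepted: the signature pushes `Person -> String` into the
    -- occurrence of `name`, so bidirectional inference can choose
    -- Person's selector.
    personName :: Person -> String
    personName = name

    -- Rejected as ambiguous, because the argument's type is not used
    -- to disambiguate:
    -- companyName :: Company -> String
    -- companyName c = name c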

Point 3 is rather like the magic classes in the OverloadedRecordFields
proposal [2] (this isn't in GHC HEAD yet, but an early version is on
Phab [3]).
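
Roughly, the idea is that an occurrence of an overloaded field name turns
into a constraint that the solver discharges once the record type is
known. Here is a hand-rolled stand-in for the proposal's built-in class
(the class and instances below are mine and purely illustrative, not the
proposal's actual definitions):

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FunctionalDependencies, FlexibleInstances,
                 FlexibleContexts #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Symbol)

    class HasField (x :: Symbol) r a | x r -> a where
      getField :: Proxy x -> r -> a

    data Person  = Person  { personName  :: String }
    data Company = Company { companyName :: String }

    instance HasField "name" Person String where
      getField _ = personName

    instance HasField "name" Company String where
      getField _ = companyName

    -- One overloaded selector; the constraint solver picks the
    -- instance from the record type at each use site.
    name :: HasField "name" r a => r -> a
    name = getField (Proxy :: Proxy "name")

    main :: IO ()
    main = do
      putStrLn (name (Person "Ada"))
      putStrLn (name (Company "Well-Typed"))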

>> If, on the other hand, you only take into account what is known about
>> the type of a at a given time, then you need special treatment for
>> unambiguous names or even trivial programs will fail to type-check, just
>> as Anthony said.
> Why special treatment for unambiguous names? They shouldn't be affected at
> all by this.

There are some design choices here as well, separately from the options
above. Some possibilities are:

 A. leave unambiguous names alone, and only do the special thing for the
ambiguous ones (this doesn't break existing code, but can lead to
confusing errors if an import is changed to introduce or remove an
ambiguity, especially for options 2 or 3 above);

 B. do the same thing for unambiguous names as ambiguous ones (but this
may well break existing code, and is likely to restrict inference too much);

 C. require an explicit syntactic cue that the name should be treated
specially (e.g. the original TDNR's use of dot, or the # sign in the
OverloadedLabels part of ORF [4]; a small sketch of the latter follows).
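
Here is what that explicit cue looks like with OverloadedLabels. The
example is mine and purely illustrative; it assumes the IsLabel class
method has the simple form `fromLabel :: a` (the exact signature differs
between GHC versions), and the instance is not something ORF defines:

    {-# LANGUAGE OverloadedLabels, DataKinds, FlexibleInstances,
                 MultiParamTypeClasses #-}

    import GHC.OverloadedLabels (IsLabel (..))

    data Person = Person { personName :: String }

    -- #name is the explicit marker: it elaborates to fromLabel at the
    -- type-level string "name", and instance selection resolves it.
    instance IsLabel "name" (Person -> String) where
      fromLabel = personName

    main :: IO ()
    main = putStrLn $ #name (Person "Ada Lovelace")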

As you can see, there are quite a few complex trade-offs here. I suspect
this is at least part of the reason for the slow progress on TDNR-like
extensions. It will be interesting to see how DuplicateRecordFields is

All the best,






Adam Gundry, Haskell Consultant
Well-Typed LLP,
