[Haskell-cafe] What's the motivation for η rules?
Conor McBride
conor at strictlypositive.org
Thu Dec 30 15:54:44 CET 2010
Hi
Thus invoked...
On 28 Dec 2010, at 23:29, Luke Palmer wrote:
> Eta conversion corresponds to extensionality; i.e. there is nothing
> more to a function than what it does to its argument.
>
> Suppose f x = g x for all x. Then using eta conversion:
>
> f = (\x. f x) = (\x. g x) = g
>
> Without eta this is not possible to prove. It would be possible for
> two functions to be distinct (well, not provably so) even if they do
> the same thing to every argument -- say if they had different
> performance characteristics. Eta is a "controversial" rule of lambda
> calculus -- sometimes it is omitted, for example, Coq does not use it.
> It tends to make things more difficult for the compiler -- I think
> Conor McBride is the local expert on that subject.
...I suppose I might say something.
The motivation for various conversion rules depends quite a lot on one's
circumstances. If the primary concern is run-time computation, then
beta-rules (elimination construct consumes constructor) and definitional
expansion (sometimes "delta"), if you have definitions, should do all the
work you need. I'm just wondering how to describe such a need. How about
this property (reminiscent of some results by Herman Geuvers)?
Let = be the conversion relation, with whatever rules you've chucked in,
and let --> be beta+delta reduction, with -->* its reflexive-transitive
closure. Suppose some closed term t, inhabiting a datatype, is
convertible with a constructor form

    t = C s1 .. sn

then we should hope that

    t -->* C r1 .. rn    with ri = si, for i in 1..n
That is: you shouldn't need to do anything clever (computing backwards,
eta-conversion) to get a head-normal form from a term which is kind
enough to have one. If this property holds, then the compiler need only
deliver the beta-delta behaviour of your code. Hurrah!
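To see the property in action, here is a tiny Haskell rendering (my
sketch; the property itself is stated for type theory, not Haskell):

    data Nat = Z | S Nat deriving Show

    plus :: Nat -> Nat -> Nat
    plus Z     n = n               -- beta: eliminator meets constructor
    plus (S m) n = S (plus m n)

    two :: Nat
    two = S (S Z)

    -- plus two two is convertible with S (S (S (S Z))), and plain
    -- beta (run the case analysis) plus delta (unfold two and plus)
    -- already delivers that constructor form: no eta, no cleverness.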
So why would we ever want eta-rules? Adding eta to an *evaluator* is
tedious, expensive, and usually not needed in order to deliver values.
However, we might want to reason about programs, perhaps for purposes
of optimization. Dependent type theories have programs in types, and
so require some notion of when it is safe to consider open terms equal
in order to say when types match: it's interesting to see how far one
can chuck eta into equality without losing decidability of conversion,
messing up the "Geuvers property", or breaking type-preservation.
It's a minefield, so tread carefully. There are all sorts of bad
interactions, e.g. with subtyping (if the subtyping rule for function
types is too weak, (\x -> f x) can have more types than f), with
strictness (if
p = (fst p, snd p), then (case p of (x, y) -> True) = True, which
breaks the Geuvers property on open terms), with reduction (there
is no good way to orientate the unit type eta-rule, u = (), in a
system of untyped reduction rules).
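You can even watch the strictness problem in Haskell itself (my
sketch, best compiled without optimization, since GHC's simplifier is
permitted to eta-reduce): seq tells an undefined function apart from
its eta-expansion.

    f :: Int -> Int
    f = undefined

    g :: Int -> Int
    g = \x -> f x        -- the eta-expansion of f

    main :: IO ()
    main = do
      print (g `seq` "fine: the lambda is already a value")
      print (f `seq` "never printed: forcing f hits bottom")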
But the news is not all bad. It is possible to add some eta-rules
without breaking the Geuvers property (for functions it's ok; for
pairs and unit it's ok if you make their patterns irrefutable). You
can then decide the beta-eta theory by postprocessing beta-normal
forms with type-directed eta-expansion (or some equivalent
type-directed trick). Epigram 2 has eta for functions, pairs,
and logical propositions (seen as types with proofs as their
indistinguishable inhabitants). I've spent a lot of time banging my
head off these issues: my head has a lot of dents, but so have the
issues.
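That irrefutable-pattern remedy has a familiar Haskell reading, too
(again my sketch, not Epigram): a lazy pair pattern commits to the
match without inspecting the pair, so it is compatible with the eta
law p = (fst p, snd p) even when p is bottom.

    p :: (Int, Bool)
    p = undefined

    strictly :: Bool
    strictly = case p of (x, y)  -> True   -- forces p: bottom

    lazily :: Bool
    lazily   = case p of ~(x, y) -> True   -- irrefutable: True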
So, indeed, eta-rules make conversion more extensional, which is
unimportant for closed computation, but useful for reasoning and for
comparing open terms. It's a fascinating, maddening game trying to
add extensionality to conversion while keeping it decidable and
ensuring that open computation is not too strict to deliver values.
Hoping this is useful, suspecting that it's TMI
Conor