[GHC] #8423: constraint solver doesn't reduce reducible closed type family expressions (even with undecidable instances!)
GHC
ghc-devs at haskell.org
Thu Mar 27 13:05:29 UTC 2014
#8423: constraint solver doesn't reduce reducible closed type family expressions
(even with undecidable instances!)
--------------------------------------------+------------------------------
Reporter: carter | Owner:
Type: feature request | Status: new
Priority: normal | Milestone: 7.10.1
Component: Compiler (Type checker) | Version: 7.7
Resolution: | Keywords:
Operating System: Unknown/Multiple | Architecture: Unknown/Multiple
Type of failure: None/Unknown |
Test Case: | Difficulty: Unknown
Blocking: | Blocked By:
| Related Tickets: #4259
--------------------------------------------+------------------------------
Comment (by goldfire):
After being prodded via email to consider this issue further, I've made a
tiny bit of progress, but nowhere near an actual solution.
It turns out that the closed type family case and the open type family
case can indeed be considered separately. Why? Because the compatibility
check can usefully be turned off in the closed type family case, but not
in the open one. Turning off the compatibility check for closed families
reduces the number of reductions possible, but doesn't threaten soundness.
On the other hand, turning off the compatibility check for unordered, open
families destroys soundness.
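For concreteness, here is a small example (not from the ticket; the name `Equal` is mine) of why ordering lets a closed family tolerate overlap that an open family cannot:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

-- In a closed family, equations are tried top to bottom, so this
-- overlap is resolved by order alone:
type family Equal a b where
  Equal a a = 'True
  Equal a b = 'False

-- Compile-time witnesses that both equations can fire:
same :: Equal Int Int :~: 'True
same = Refl

diff :: Equal Int Bool :~: 'False
diff = Refl

-- As *open* instances, `Equal a a = 'True` and `Equal a b = 'False`
-- would be rejected: with no ordering and no compatibility between
-- their RHSs, `Equal Int Int` could reduce to both 'True and 'False,
-- and from 'True ~ 'False anything follows.
```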
So, I thought about this: during the compatibility check, I normalized
both substituted RHSs, but, during the normalization, I ignored
compatibility. This seems like a nice, conservative check, though one less
conservative than the current check, which requires the two RHSs to
coincide syntactically.
It turned out that implementing the check above was non-trivial (it
required delicate ordering among family instances), so I did the following
(even better) thing instead (this is available in the `wip/recurs-compat`
branch):
- The existing compatibility check was left unchanged.
- During type family simplification, an extra check was added, in parallel
with the apartness check. (This follows more closely what is written up in
[http://www.cis.upenn.edu/~eir/papers/2014/axioms/axioms-extended.pdf the
paper].) Say the current, matching equation is `e` and the previous one
under consideration is `d`. Suppose the target is `t`.
- We unify `d` with `t`, possibly producing unifying substitution `ds`.
(If unification fails, the new check fails, and no reduction happens. But,
note that if `d` and `t` are apart, then the apartness check succeeds,
allowing reduction.)
- Let `es` be the substitution (sure to exist) that witnesses that `e`
matches `t`.
- Now, we check if `normalize(ds(d_rhs)) = normalize(ds(es(e_rhs)))`. If
this check succeeds, we allow reduction by equation `e`.
- The `normalize` function here is special: if it is reducing a closed
type family and needs to do a compatibility check, that check ''does not
normalize''. Without this restriction, we would often loop, as
demonstrated below.
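The steps above can be run on a toy model. Everything in this sketch is invented for illustration (`Ty`, `Eqn`, `matchTy`, `unifyTy`, `normalize`, `plainOK`, `extraOK` are not GHC's implementation; apartness is crudely approximated as "unification fails", and the `PSum`-style equations are reconstructed, not quoted from comment:3):

```haskell
import Control.Monad (foldM)
import Data.Maybe (fromMaybe)

data Ty = V String | C String [Ty] deriving (Eq, Show)
data Eqn = Eqn Ty Ty                    -- LHS pattern, RHS
type Subst = [(String, Ty)]

subst :: Subst -> Ty -> Ty
subst s (V v)    = fromMaybe (V v) (lookup v s)
subst s (C c ts) = C c (map (subst s) ts)

bindV :: String -> Ty -> Subst -> Maybe Subst
bindV v t s = case lookup v s of
  Nothing           -> Just ((v, t) : s)
  Just t' | t' == t -> Just s
  _                 -> Nothing

-- One-way matching: find es with subst es pattern == target.
matchTy :: Ty -> Ty -> Subst -> Maybe Subst
matchTy (V v)    t        s = bindV v t s
matchTy (C c ps) (C d ts) s
  | c == d, length ps == length ts
  = foldM (\acc (p, u) -> matchTy p u acc) s (zip ps ts)
matchTy _ _ _ = Nothing

-- First-order unification (no occurs check; targets here are tiny).
unifyTy :: Ty -> Ty -> Subst -> Maybe Subst
unifyTy a b s = case (subst s a, subst s b) of
  (V v, t) -> bindV v t s
  (t, V v) -> bindV v t s
  (C c ps, C d ts)
    | c == d, length ps == length ts
    -> foldM (\acc (p, u) -> unifyTy p u acc) s (zip ps ts)
  _ -> Nothing

-- A check decides whether earlier equation d blocks matching equation e
-- on target t (es witnesses that e matches t).
type Check = [Eqn] -> Eqn -> Eqn -> Ty -> Subst -> Bool

-- Reduce the head of t by the first matching equation, provided every
-- earlier equation passes the check; a blocked match means t is stuck.
step :: Check -> [Eqn] -> Ty -> Maybe Ty
step ok eqns t = go [] eqns
  where
    go _ [] = Nothing
    go prev (e@(Eqn l r) : rest) = case matchTy l t [] of
      Just es | all (\d -> ok eqns d e t es) prev -> Just (subst es r)
              | otherwise                         -> Nothing
      Nothing -> go (prev ++ [e]) rest

normalize :: Check -> [Eqn] -> Ty -> Ty
normalize ok eqns (C c ts) =
  let t' = C c (map (normalize ok eqns) ts)
  in maybe t' (normalize ok eqns) (step ok eqns t')
normalize _ _ t = t

-- GHC 7.8's rule: d must fail to unify with t (apartness, approximated),
-- or the RHSs must coincide syntactically under the unifier ds.
plainOK :: Check
plainOK _ (Eqn dl dr) (Eqn _ er) t es = case unifyTy dl t [] of
  Nothing -> True
  Just ds -> subst ds dr == subst ds (subst es er)

-- The proposed extra check: the same comparison, but both RHSs are
-- normalized first, where the inner normalization uses only the plain
-- (non-normalizing) rule, to avoid looping.
extraOK :: Check
extraOK eqns (Eqn dl dr) (Eqn _ er) t es = case unifyTy dl t [] of
  Nothing -> True
  Just ds -> norm (subst ds dr) == norm (subst ds (subst es er))
  where norm = normalize plainOK eqns

-- Peano addition, in the style of comment:3's PSum.
psum :: [Eqn]
psum =
  [ Eqn (C "PSum" [C "Z" [], V "b1"])       (V "b1")
  , Eqn (C "PSum" [V "a2", C "Z" []])       (V "a2")
  , Eqn (C "PSum" [C "S" [V "a3"], V "b3"]) (C "S" [C "PSum" [V "a3", V "b3"]])
  , Eqn (C "PSum" [V "a4", C "S" [V "b4"]]) (C "S" [C "PSum" [V "a4", V "b4"]])
  ]

main :: IO ()
main = do
  -- Target t = PSum ('S x) y: equation 3 matches (e); equation 2 is d.
  let t = C "PSum" [C "S" [V "x"], V "y"]
  print (normalize plainOK psum t)  -- stuck: equation 3 blocked by 2
  print (normalize extraOK psum t)  -- fires: both RHSs normalize to S x
```

Running it shows the plain rule leaving `PSum ('S x) y` stuck, while the extra check reduces it to `'S (PSum x y)`, matching the behavior described for the `wip/recurs-compat` branch.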
There are a few natural questions at this point:
- '''Is this type safe?''' I haven't proved anything. But, I believe that
it is type safe as long as all type families terminate. Why? Because the
type safety proof (appearing in the paper) doesn't seem to be disturbed by
this change. That proof requires only ''local'' confluence, meaning that
it can take an arbitrary number of steps to recombine two terms after they
have diverged. However, I believe that this is '''not''' type-safe in the
presence of non-terminating type families. Why? Because the proof in the
non-terminating case requires a ''local diamond'' property, which requires
single-step recombining after divergence. The normalization step
explicitly throws that out. I don't know of a way to repair the proof, so
I'm led to think that the property is indeed broken. I have no
counterexample, though.
- '''Does it work in practice?''' A bit. Take the `PSum` family in
comment:3. In GHC 7.8, the third equation won't fire because it conflicts
with the second. With the extra check as described here, the third
equation can indeed fire. But, the fourth can't, because the compatibility
check against the third requires normalization internally to work. I don't
see an easy solution to that problem.
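Comment:3 isn't reproduced in this thread; the `PSum` family in question is presumably along these lines (the reconstruction and the names `Peano`, `two`, `blocked` are mine), which makes the blocked equations easy to see:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

data Peano = Z | S Peano

type family PSum (a :: Peano) (b :: Peano) :: Peano where
  PSum 'Z b     = b               -- (1)
  PSum a 'Z     = a               -- (2)
  PSum ('S a) b = 'S (PSum a b)   -- (3)
  PSum a ('S b) = 'S (PSum a b)   -- (4)

-- On ground arguments everything reduces, since ground overlaps are
-- settled by apartness:
two :: PSum ('S 'Z) ('S 'Z) :~: 'S ('S 'Z)
two = Refl

-- With a variable argument, equation (3) is blocked by (2): their
-- unifier sends b to 'Z, and the RHSs 'S a and 'S (PSum a 'Z) agree
-- only after normalizing PSum a 'Z, which the stock compatibility
-- check refuses to do. Hence this witness is rejected by GHC:
--
-- blocked :: PSum ('S a) b :~: 'S (PSum a b)
-- blocked = Refl
```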
- '''What about axioms in Core?''' My work didn't touch axioms. So,
`-dcore-lint` would fail in my branch if the new checks were used. A real
solution here would require changes to Core, essentially to record exactly
how two equations are compatible. Given that the proof of compatibility
(with normalization) might be arbitrarily large, it would seem to require
some new form that is an explicit witness of compatibility. Of course, we
could just bake the normalization step into the Core type-checking
algorithm, but normalization is potentially non-terminating, so doing so
breaks the tenet of "type-checking Core is simple and decidable".
- '''Where to go from here?''' I believe that the core problem is that
we're currently finding some sort of least fixed point of compatibility,
where we really want the greatest fixed point. That is, if we assume that
the third and fourth equations of `PSum` are compatible, then we can prove
they're compatible. This recursive proof would be productive (that is,
there would be a new `'S` constructor), so I think the idea isn't entirely
silly. I haven't worked out any details, though.
- '''What's the looping example?''' Check this out:
{{{
type family F a where
F [a] = F a
F b = b
}}}
This is a simple, terminating type family. Yet, if we try to simplify
`F c` (which should be stuck), a fully recursive compatibility check would
loop, as the check would, in turn, need to simplify `F a`, which is
isomorphic to what we started with.
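On ground arguments, by contrast, the same family reduces without trouble. A quick way to see both behaviors (restating the family from the snippet above; the names `unwrap` and `stuck` are mine):

```haskell
{-# LANGUAGE TypeFamilies #-}
type family F a where
  F [a] = F a
  F b   = b

-- Ground targets reduce fine: F [[Int]] ~> F [Int] ~> F Int ~> Int,
-- so this coercion typechecks (and unwrap behaves as Int -> Int).
unwrap :: F [[Int]] -> Int
unwrap = id

-- A variable target is stuck, as the text says: F c matches the second
-- equation, but the first equation is neither apart from F c nor
-- compatible with the second on it, so GHC rejects this:
--
-- stuck :: F c -> c
-- stuck = id
```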
Given the complications here (especially the thought of how to update
Core), I'm tabling this for now.
--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/8423#comment:10>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler