[GHC] #8503: New GeneralizedNewtypeDeriving check still isn't permissive enough

GHC ghc-devs at haskell.org
Tue Nov 19 08:23:02 UTC 2013


#8503: New GeneralizedNewtypeDeriving check still isn't permissive enough
-------------------------------------+------------------------------------
        Reporter:  goldfire          |            Owner:  goldfire
            Type:  bug               |           Status:  new
        Priority:  normal            |        Milestone:
       Component:  Compiler          |          Version:  7.7
      Resolution:                    |         Keywords:
Operating System:  Unknown/Multiple  |     Architecture:  Unknown/Multiple
 Type of failure:  None/Unknown      |       Difficulty:  Unknown
       Test Case:                    |       Blocked By:
        Blocking:                    |  Related Tickets:
-------------------------------------+------------------------------------
Changes (by simonpj):

 * cc: diatchki (added)


Comment:

 Let's note that even with `Fix` there is a legitimate and useful coercion
 `Coercible (Fix (Either x)) (Either x (Fix (Either x)))`.   We can write
 it by hand, and you might want to get from the `Fix` form to the `Either`
 form.  So solving with the same instance declaration may terminate for
 some constraints, but not for others.
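
 To make that direction concrete, here is a sketch of the coercion written
 by hand, with no `Coercible` at all; the `Fix` newtype and the helper
 names are illustrative, not from the ticket:
 {{{
 newtype Fix f = Fix (f (Fix f))

 -- From the Fix form to the Either form (the coercion described above):
 unroll :: Fix (Either x) -> Either x (Fix (Either x))
 unroll (Fix e) = e

 -- And back again:
 roll :: Either x (Fix (Either x)) -> Fix (Either x)
 roll = Fix
 }}}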

 The constraint solver already stores a "depth" in each constraint.  The
 depth is incremented by 1 each time you use an instance declaration.  For
 example, when you use `instance Eq a => Eq [a]` to solve `d1:Eq [Int]`, by
 reducing it to `d2:Eq Int`, then `d2` has a depth one greater than `d1`.
 Since such instance declarations remove a type constructor, a deep
 recursion implies a deeply nested type, like `[[[[[[Int]]]]]]`.  So
 (proposal) maybe we can simply depth-bound the solver.  In fact it already
 is; this is the `-fcontext-stack` flag.
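
 A toy model of that depth bound (the types and names here are
 illustrative, not GHC's actual solver):
 {{{
 -- Each use of `instance Eq a => Eq [a]` peels one list constructor
 -- and spends one unit of depth, like the per-constraint depth counter.
 data Ty = TInt | TList Ty deriving (Eq, Show)

 solveEq :: Int -> Ty -> Maybe ()
 solveEq depth _ | depth < 0 = Nothing  -- bound exhausted (cf. -fcontext-stack)
 solveEq _     TInt      = Just ()      -- base instance: Eq Int
 solveEq depth (TList t) = solveEq (depth - 1) t  -- instance Eq a => Eq [a]
 }}}
 For example, solving `Eq [[Int]]` succeeds at depth 2, while a bound of 1
 is exhausted by a triply nested list.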

 (Actually, I fear that we may instead construct a recursive dictionary,
 which we probably do not want for newtypes (though we do want it for data
 types), because we'll see a constraint we have already solved.  See
 [http://research.microsoft.com/en-us/um/people/simonpj/papers/hmap/index.htm
 Scrap your boilerplate with class] for why we want recursive
 dictionaries.)

 Here is one other idea.  Suppose we have the wanted constraint `Coercible
 [alpha] [Int]`.  Should we use the `Coercible a b => Coercible [a] [b]`
 instance to solve it?  Well, if it turns out that `alpha` (a unification
 variable) ultimately is `Int`, then doing so would not be wrong, but it
 would be a waste because the identity coercion will do the job.  So maybe
 this is a bit like the `Equal` type family in the
 [http://research.microsoft.com/~simonpj/papers/ext-f Closed type families]
 paper: we should not do the list/list reduction unless the argument types
 are "apart" (in the terminology of the paper).
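
 A sketch of what such an apartness test looks like, over a toy type
 grammar rather than GHC's real types:
 {{{
 -- Two types are "apart" if no substitution for the variables can make
 -- them equal.  A unification variable is never apart from anything,
 -- which is why `Coercible [alpha] [Int]` would not be reduced here.
 data Sh = SVar String | SCon String [Sh] deriving (Eq, Show)

 apart :: Sh -> Sh -> Bool
 apart (SVar _) _ = False
 apart _ (SVar _) = False
 apart (SCon c ts) (SCon d us)
   | c /= d || length ts /= length us = True
   | otherwise                        = or (zipWith apart ts us)
 }}}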

 But that would be too strong.  Consider
 {{{
 f :: Coercible a b => [a] -> [b]
 f x = coerce x
 }}}
 This should work, but it requires use of that list/list instance, and we
 don't know that `a` and `b` are unequal.  It's just that in this context
 we can treat `a` and `b` as skolems, and hence "apart" for this purpose.
 Except, even then it's not quite right: we could have
 {{{
 f :: (a~b, Coercible a b) => [a] -> [b]
 }}}
 and now `a` and `b` are provably equal.  (Or some GADT version thereof.)
 So I think we probably need the depth-bound thing too.
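
 For reference, the first signature above does typecheck once
 `Data.Coerce` is available (GHC 7.8 and later); a minimal sketch, with an
 illustrative newtype:
 {{{
 import Data.Coerce

 -- The list/list instance at work: Coercible a b lifts to
 -- Coercible [a] [b] because the list element's role is representational.
 f :: Coercible a b => [a] -> [b]
 f = coerce

 newtype Age = Age Int

 -- e.g. f [1,2,3] :: [Age] rewraps every element at zero runtime cost
 unAges :: [Age] -> [Int]
 unAges = map (\(Age n) -> n)
 }}}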

 Do any of these ideas make sense to you guys? I'll add Iavor to the cc
 list.

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/8503#comment:17>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler

