<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi,</p>
<p>I've been working on implementing the Haskell type system for my
specialized Haskell interpreter. I have now constructed a system
that can type-check and run Haskell code that contains explicit
type signatures, type constraints, and arbitrary-rank types.</p>
<p>I'm now thinking that I may need to implement GADTs -- i.e.
constructors that introduce local constraints, including equality
constraints. I'm looking at the paper "OutsideIn(X): Modular type
inference with local assumptions" from 2011. I have three
questions about implementing GADTs -- I'd be grateful for answers
to any of them.<br>
</p>
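<p>(For concreteness, the kind of thing I mean is the usual
expression-evaluator example, where matching on a constructor brings
a local equality such as (a ~ Int) into scope:)</p>
<pre>{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntLit n)  = n            -- uses the local constraint a ~ Int
eval (BoolLit b) = b            -- uses the local constraint a ~ Bool
eval (If c t e)  = if eval c then eval t else eval e
</pre>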
<p><br>
</p>
<p>QUESTION 1: Are there any obviously important resources that I've
overlooked?</p>
<p>The 2011 OutsideIn paper mentions several previous papers that
seem quite helpful:</p>
<p>* Peyton Jones et al. 2006. Simple unification-based type
inference for GADTs<br>
</p>
<p>* Schrijvers et al. 2007. Towards open type functions for Haskell<br>
</p>
<p>* Peyton Jones et al. 2004. Wobbly types: etc.<br>
</p>
<p>* Schrijvers et al. 2008. Type checking with open type functions.<br>
</p>
<p>* Schrijvers et al. 2009. Complete and decidable type inference for
GADTs<br>
</p>
<p>* Vytiniotis et al. 2010. Let should not be generalized.</p>
<p>And of course the GHC source code. (I'm not looking at coercions
at the present time, because my type-checker translates to the
plain lambda calculus without type annotations, not System F or
F_C. Hopefully I can remedy this later...)<br>
</p>
<p><br>
</p>
<p>QUESTION 2: If my quick scan is correct, none of the papers
mention the GHC technique of determining untouchability by
assigning "levels" to type variables. Is there any written paper
(outside the GHC sources) that discusses type levels?</p>
<p><br>
</p>
<p>QUESTION 3: My impression is that: <br>
</p>
<p>(a) type variable levels were introduced in order to clarify
which MetaTyVars are "untouchable", but</p>
<p>(b) levels are now also used to check that type variables do not
escape their quantification scope, and</p>
<p>(c) levels can also be used to figure out which variables are
free in the type environment, and therefore should not be
generalized over.<br>
</p>
<p>Does this sound right? I suspect that I might be wrong about the
last one...</p>
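<p>(To make (a) concrete, here is the bookkeeping I currently imagine
-- this is my guess at the scheme, not GHC's actual representation:)</p>
<pre>import Data.IORef (IORef)

-- My guess at the level bookkeeping, not GHC's actual code: every
-- implication (e.g. a GADT match that brings local equalities into
-- scope) gets a level one deeper than its surroundings, and every
-- unification variable records the level at which it was created.
data MetaTv = MetaTv
  { mtvLevel :: Int                 -- level at which the variable was born
  , mtvRef   :: IORef (Maybe Type)  -- what it is unified with, if anything
  }

data Type = MetaVar MetaTv | TyCon String [Type]

-- (a): a variable born outside an implication is "untouchable" inside
-- it, i.e. we may not unify it under the local assumptions.
untouchable :: Int -> MetaTv -> Bool
untouchable implLevel tv = mtvLevel tv < implLevel
</pre>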
<p><br>
</p>
<p>Thanks again, and sorry for the long e-mail.<br>
</p>
-BenRI<br>
<p><br>
</p>
<div class="moz-cite-prefix">On 1/18/22 8:55 PM, Benjamin Redelings
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:0251fb0b-b4b2-d91b-ca34-55e3b812e456@gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p>Hi,</p>
<p>1. I think I have clarified my problem a bit. It is actually
not related to pattern bindings. Here's an example:</p>
<pre>h = let f c i = if i > 10 then c else g c 'b'
        g 'a' w = f 'b' 10
        g z w = z
    in (f 'a' (1::Int), f 'a' (1.0::Double))
</pre>
<div class="moz-cite-prefix">
<p>If I am understanding the Haskell type system correctly, <br>
</p>
<p>* the definitions of f and g form a recursive group</p>
<p>* the monomorphism restriction is not invoked<br>
</p>
<p>* the inner binding (to f and g) leads to a local value
environment (LVE):<br>
</p>
<pre>{ f :: Char -> a -> Char; g :: Char -> Char -> Char }
</pre>
<p>with predicates (Num a, Ord a)<br>
</p>
<p>2. In this situation, Typing Haskell in Haskell suggests that
we should NOT apply the predicates to the environment because
the type for g does not contain 'a', and would become
ambiguous (section 11.6.2). Instead, we should only apply
predicates to the environment if their type variables are
present in ALL types in the current declaration group.</p>
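<p>(A sketch of how I am reading that rule -- the types and names here
are mine, not the paper's:)</p>
<pre>type TyVar = String

-- e.g. IsIn "Num" ["a"] stands for the predicate (Num a)
data Pred = IsIn String [TyVar]

predTyVars :: Pred -> [TyVar]
predTyVars (IsIn _ tvs) = tvs

-- Keep a predicate in the group's types only if its type variables
-- occur in the inferred type of *every* binding in the group.
keepPred :: [[TyVar]] -> Pred -> Bool
keepPred groupTyVars p =
  all (\tvs -> all (`elem` tvs) (predTyVars p)) groupTyVars

-- For the example above, f's type mentions ["a"] and g's mentions [],
-- so (Num a) and (Ord a) are dropped:
example :: Bool
example = keepPred [["a"], []] (IsIn "Num" ["a"])   -- False
</pre>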
<p>Since the predicates (Num a, Ord a) are not retained, we
cannot quantify over a.</p>
<p>It seems like this should make `f` monomorphic in a, and thus
we should not be able to apply (f 'a') to both (1::Int) and
(1.0::Double).</p>
<p>Does that make any sense?<br>
</p>
<p>3. GHC, however, compiles this just fine. But if I add
"default ()", it no longer compiles.</p>
<p>4. On further reflection, Typing Haskell in Haskell applies
defaulting rules when evaluating each binding, and not just at
the top level. So this might be part of where my code is
going wrong.<br>
</p>
<p>-BenRI<br>
</p>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">On 1/15/22 11:09 AM, Benjamin
Redelings wrote:<br>
</div>
<blockquote type="cite"
cite="mid:ea6cf444-b311-c9aa-2ccb-6e60ef6f5edd@gmail.com">Hi, <br>
<br>
1. I'm reading "A Static Semantics for Haskell" and trying to
code it up. I came across some odd behavior with pattern
bindings, and I was wondering if someone could explain or point
me in the right direction. <br>
<br>
Suppose you have the declaration <br>
<br>
(x,y) = ('a',2) <br>
<br>
My current code is yielding: <br>
<br>
x :: Num a => Char <br>
<br>
y :: Num a => a <br>
<br>
However, I notice that ghci gives x the type Char, with no
constraint, which is what I would expect. It also gives y the
type 'Num b => b', so I don't think it is defaulting a to Int
here. <br>
<br>
The weird results from my code stem from rule BIND-PRED in
Figure 15 of
<a class="moz-txt-link-freetext"
href="https://homepages.inf.ed.ac.uk/wadler/papers/staticsemantics/static-semantics.ps"
moz-do-not-send="true">https://homepages.inf.ed.ac.uk/wadler/papers/staticsemantics/static-semantics.ps</a><br>
<br>
E |- bind ~~> \dicts : theta -> monobinds in bind :
(LIE_{encl}, theta => LVE) <br>
<br>
Here theta = ( Num a ) and LVE = { x :: Char, y :: a }. So,
theta => LVE is <br>
<br>
{ x :: Num a => Char, y :: Num a => a } <br>
<br>
The obvious thing to do is to avoid changing a type T to Num a
=> T if T does not contain a, but I'm not totally sure that
trick gets to the bottom of the issue. Also, the paper doesn't
define theta => LVE that way. Is there something else I should
read on this? <br>
<br>
2. If we just chop out predicates which mention variables not in
the type (== ambiguous predicates?), then I'm not totally sure
how to generate code for this. <br>
<br>
I would imagine that we have something like <br>
<br>
tup dn = ('a', fromInteger dn 2) <br>
<br>
x = case (tup dn) of (x,y) -> x <br>
<br>
y dn = case (tup dn) of (x,y) -> y <br>
<br>
In this case it's not clear where to get the `dn` argument of
`tup` from, in the definition of `x`. Can we pass in
`undefined`? Should we do something else? <br>
<br>
If anyone can shed light on this, I would be grateful :-) <br>
<br>
-BenRI <br>
<br>
</blockquote>
</blockquote>
</body>
</html>