[Haskell-cafe] Developing Programs and Proofs Spontaneously using GADT

Dan Licata drl at cs.cmu.edu
Sat Aug 4 16:59:29 EDT 2007


Another way of understanding the terminology is this:

A "dependent product" type (usually written \Pi x:A.B) can be thought of
as a big product of the types (B[a1/x], ..., B[aN/x]) for the
inhabitants a1...aN of the type A.  To introduce a dependent product, you
have to provide each component of the tuple.  To eliminate a dependent
product, you can project out one component.
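
To make this concrete, here is a minimal sketch in modern GHC Haskell
(DataKinds, GADTs, type families; the family B, the singleton SBool, and
all other names below are invented for illustration), taking A = Bool so
that the big product has exactly two components:

 {-# LANGUAGE DataKinds, GADTs, TypeFamilies, RankNTypes #-}

 -- B plays the role of the family of types B[x] over A = Bool:
 type family B (x :: Bool) where
   B 'True  = Int
   B 'False = String

 -- a runtime witness (singleton) for the index x:
 data SBool (x :: Bool) where
   STrue  :: SBool 'True
   SFalse :: SBool 'False

 -- \Pi x:Bool. B x is the big product (B True, B False).
 -- Intro: give a component at a generic index, i.e. a function:
 piIntro :: SBool x -> B x
 piIntro STrue  = 42        -- the B True  component
 piIntro SFalse = "hello"   -- the B False component

 -- Elim: project out the a_i-th component by applying at a_i:
 projTrue :: (forall x. SBool x -> B x) -> Int
 projTrue f = f STrue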

A "dependent sum" type (usually written \Sigma x:A.B) can be thought of
as a big sum of the types B[a1/x] + ... + B[aN/x], where again a1 .. aN
are all the inhabitants of A.  To introduce a dependent sum, you pick a
particular branch of the sum (by choosing an a_i) and inject a value of
that type (i.e., B[a_i/x]).  To eliminate a dependent sum, you can see
what branch a value is in (i.e., recover the a_i) and you can see what
the associated data is.
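
Dually, a sketch of the big sum, reusing B and SBool from the sketch
above (again, all names are invented for illustration):

 -- \Sigma x:Bool. B x is the big sum B True + B False.
 -- Intro: pick a branch by choosing a tag, and inject a value:
 data SigmaBool where
   Pack :: SBool x -> B x -> SigmaBool

 inFalse :: SigmaBool
 inFalse = Pack SFalse "hello"   -- inject into the False branch

 -- Elim: recover the tag and see the associated data:
 unpack :: SigmaBool -> Either Int String
 unpack (Pack STrue  n) = Left  n
 unpack (Pack SFalse s) = Right s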

This terminology takes a little getting used to because of how we
usually present these intro and elim forms syntactically.  Consider
dependent products.  For finite products (e.g., (Int, String)) we
introduce a product just by writing down an inhabitant of each component
type.  However, the dependent product needs one component for each
inhabitant of the type A, so we can't in general list them all
explicitly.  Thus, the usual intro form for a dependent product is to
give an inhabitant of B for a generic inhabitant x of A (i.e., a
variable x:A).  So the intro form is a function.  On the other hand, to
eliminate a finite product we usually say "I'd like the 2nd component,
please".  But for a dependent product, you have to say "I'd like the
a_i^th component", so the elim is application.

Sums are similar.  For a finite sum, we usually say "this data is in the
second branch", but for a dependent sum, we have to say "this data is in
the a_i^th branch".  So the intro form is a pair of a tag a_i and a
value of type B[a_i/x].  On the other hand, to eliminate a finite sum,
we usually do a case with one branch for each of the tags, where in each
branch we get the tagged value.  To eliminate a dependent sum, we cover
all those cases at once by getting (1) a generic tag x:A and (2) a value
of the type associated with that tag.  But this is exactly a

 split (x,y) = e1 in e2

pattern-matching style elimination form for a pair.  
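
In the sketches above, that split form is literally a pattern match on
the packed pair: the body gets a generic tag x together with a value of
the type B x associated with it (splitSigma is an invented name):

 -- split (x,y) = e1 in e2, as an eliminator for SigmaBool:
 splitSigma :: SigmaBool -> (forall x. SBool x -> B x -> r) -> r
 splitSigma (Pack tag val) body = body tag val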

So, dependent products are represented by functions, and dependent sums
are represented by pairs.  It's confusing at first, but it's established
terminology.  I tend to use "dependent function" and "dependent pair"
myself, since people sometimes use "dependent product" to mean \Sigma
rather than \Pi (understandable, given the intro and elim forms for
\Sigma!), which makes the term ambiguous.

Does that make sense?
-Dan


On Aug04, Derek Elkins wrote:
> On Sat, 2007-08-04 at 23:25 +0800, Shin-Cheng Mu wrote:
> > 
> [...]
> > In Haskell, the existential quantifier is mimicked by forall.
> > We define:
> > 
> >  > data DepSum a p = forall i . DepSum (a i) (p i)
> > 
> > The term "dependent sum" is borrowed from the Omega tutorial
> > of Sheard, Hook, and Linger [2] (why is it called a sum,
> > not a product?).
> 
> There is already a "dependent product": the \Pi type.  One compelling
> reason for this naming comes from category theory and how the two
> relate to each other (the dependent sum is a left adjoint and the
> dependent product is a right adjoint, though not adjoint to each
> other).  They are related in the same way products and sums are
> related, or universal quantification and existential quantification
> are.  So, looking at the product case (A,B)=AxB (and ()=1) and the
> sum case Either A B = A+B, we can see how they generalize to the above.
> 
> Let's say we have a pair of values of the same type A.  We can clearly
> represent it as AxA, but another alternative is (1+1)->A.  We can write
> the isomorphism in Haskell:
> pToF :: (a,a) -> (Either () () -> a)
> pToF (x,_) (Left  ()) = x
> pToF (_,y) (Right ()) = y
> 
> fToP :: (Either () () -> a) -> (a,a)
> fToP f = (f (Left ()), f (Right ()))
> 
> Similarly for sums, A+A can be written (1+1)xA.  Again, the Haskell:
> sToP :: Either a a -> (Either () (),a)
> sToP (Left  x) = (Left  (), x)
> sToP (Right x) = (Right (), x)
> 
> pToS :: (Either () (), a) -> Either a a
> pToS (Left  (), x) = Left x
> pToS (Right (), x) = Right x
> 
> Incidentally, this is how sums are usually encoded, i.e. a pair of a tag
> and a value.  (Also incidentally, these are almost exactly the
> isomorphisms defining adjunctions related to the ones that define
> product and sum, i.e. the above definitions immediately fall out of the
> category theory.)
> 
> So how to represent AxB or A+B with this encoding?  You can't.  However,
> in a dependently typed language we can write (in some dependently typed
> pseudo-Haskell) [for concreteness, let's set A=Int, B=String]:
> f :: Either () () -> Type
> f (Left  ()) = Int
> f (Right ()) = String
> 
> Now we use the same definitions as above, only at these types:
> pToF :: (Int,String) -> ((s :: Either () ()) -> f s)
> fToP :: ((s :: Either () ()) -> f s) -> (Int,String)
> sToP :: Either Int String -> (\Sigma s :: Either () (). f s)
> pToS :: (\Sigma s :: Either () (). f s) -> Either Int String
> 
> So when there is no dependency, the dependent product degenerates into
> an ordinary function space, and similarly the dependent sum degenerates
> into an ordinary product.
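
For what it's worth, Derek's pseudo-Haskell can be approximated in
actual modern GHC Haskell: DataKinds promotes Either () (), a closed
type family plays the role of f, and a singleton STag witnesses the tag
at runtime.  All the names below are invented, and this is only one
possible encoding:

 {-# LANGUAGE DataKinds, GADTs, TypeFamilies, RankNTypes #-}

 -- f from above, as a closed type family over the promoted Either () ():
 type family F (s :: Either () ()) where
   F ('Left  '()) = Int
   F ('Right '()) = String

 -- a runtime witness for the tag s:
 data STag (s :: Either () ()) where
   SL :: STag ('Left  '())
   SR :: STag ('Right '())

 -- the dependent product (s :: Either () ()) -> f s:
 pToF :: (Int, String) -> (forall s. STag s -> F s)
 pToF (x, _) SL = x
 pToF (_, y) SR = y

 fToP :: (forall s. STag s -> F s) -> (Int, String)
 fToP f = (f SL, f SR)

 -- the dependent sum \Sigma s :: Either () (). f s:
 data DepPair where
   DepPair :: STag s -> F s -> DepPair

 sToP :: Either Int String -> DepPair
 sToP (Left  x) = DepPair SL x
 sToP (Right y) = DepPair SR y

 pToS :: DepPair -> Either Int String
 pToS (DepPair SL x) = Left  x
 pToS (DepPair SR y) = Right y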

