[Haskell-cafe] Data structure containing elements which are instances of the same type class
wren ng thornton
wren at freegeek.org
Sun Aug 19 05:24:57 CEST 2012
On 8/17/12 12:54 AM, Alexander Solla wrote:
> On Thu, Aug 16, 2012 at 8:07 PM, wren ng thornton <wren at freegeek.org> wrote:
>> Though bear in mind we're discussing second-order quantification here, not
> Can you expand on what you mean here? I don't see two kinds of
> quantification in the type language (at least, reflexively, in the context
> of what we're discussing). In particular, I don't see how to quantify over
> predicates for (or sets of, via the extensions of the predicates) types.
> Is Haskell's 'forall' doing double duty?
Nope, it's the "forall" of mathematics doing double duty :)
Whenever we quantify, there's always some domain being
quantified over, though all too often that domain is left implicit;
whence lots of confusion over the years. And, of course, there's also the
body in the scope of the quantifier, and the quantified expression as a
whole. For example, consider the expression:
forall a. P(a)
The three important collections to bear in mind are:
(1) the collection of things a ranges over
(2) the collection of things P(a) belongs to
(3) the collection of things forall a. P(a) belongs to
So if we think of P as a function from (1) to (2), and name the space of
such functions (1->2), then we can think of the quantifier as a function
from (1->2) to (3).
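As a concrete sketch in Haskell (the function name here is made up for illustration), the three collections show up in an ordinary polymorphic signature:

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- For the type  forall a. [a] -> Int :
--   (1) the quantified variable a ranges over types
--   (2) the body  [a] -> Int  is itself a type (so P maps types to types)
--   (3) the whole  forall a. [a] -> Int  is again a (polymorphic) type
lengthInt :: forall a. [a] -> Int
lengthInt = length
```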
When you talk to logicians about quantifiers, the first thing they think
of is so-called "first-order" quantification. That is, given the above
expression, they think that the thing being quantified over is a
collection of "individuals" who live outside of logic itself. For
example, we could be quantifying over the natural numbers, or over the
kinds of animals in the world, or any other extra-logical group of entities.
In Haskell, when we see the above expression we think that the thing
being quantified over is some collection of types. But, that means
when we think of P as a function it's taking types and returning types!
So the thing you're quantifying over and the thing you're constructing
are from the same domain! This gets logicians flustered and so they
call it "second-order" (or more generally, "higher-order")
quantification. If you assert the primacy of first-order logic, it makes
sense right? In the first-order case we're quantifying over individuals;
in the second-order case we're quantifying over collections of
individuals; so third-order would be quantifying over collections of
collections of individuals; and on up to higher orders.
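To see the second-order character in Haskell itself, consider a rank-2 type, where an *argument* is universally quantified (a small sketch; the function name is invented):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument f is itself universally quantified: the caller must
-- supply a function that works at every type. We then instantiate
-- that quantifier twice, at Int and at Bool.
applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyBoth f (n, b) = (f n, f b)
```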
Personally, I find the names "first-order" and "second-order" rather
dubious--- though the distinction is a critical one to make. Part of the
reason for its dubiousness can be seen when you look at pure type
systems (PTSes), which make explicit that (1), (2), and (3) above can
each be the same or different
in all combinations. First-order quantification is the sort of thing you
get from Pi/Sigma types in dependently typed languages like LF;
second-order quantification is the sort of thing you get from the
forall/exists quantifiers in polymorphic languages like System F. But
the Barendregt cube makes it clear that these are *orthogonal* features.
Neither one of them comes "first" or "second"; they just do very
different things. Logicians thought first of first-order quantification,
so they consider that primitive and worry about the complications of
dealing with second-/higher-order quantification. Until recently type
theory was focused on polymorphism of various sorts, so that's
considered primitive or at least easier, whereas dependent types are
exotic and worrisome/hard.
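The classic illustration of System F-style (second-order) quantification is Church numerals, which can be transcribed into Haskell with RankNTypes (a sketch, not from the original post):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The quantified type  forall a. (a -> a) -> a -> a  lives in the
-- same domain (types) as the a it quantifies over: this is exactly
-- the System F forall discussed above.
newtype Church = Church (forall a. (a -> a) -> a -> a)

zero :: Church
zero = Church (\_ z -> z)

suc :: Church -> Church
suc (Church n) = Church (\s z -> s (n s z))

toInt :: Church -> Int
toInt (Church n) = n (+ 1) 0
```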
 This is also the root of the distinction between "functions",
"predicates", and "logical connectives" in traditional logic. Functions
map individuals to individuals; predicates map individuals to
propositions/truth-values; and connectives map propositions/TVs to
propositions/TVs. Though, traditionally, the set of connectives is taken
to be small and fixed whereas functions and predicates are left to be
defined by a "signature".
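The function/predicate/connective trichotomy has a rough Haskell analogue (a loose sketch, under the usual Curry-Howard reading; the names are invented): ordinary functions map values to values, type classes map types to propositions (constraints), and constraint tuples act as conjunction.

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- "Function": maps individuals (values) to individuals.
double :: Int -> Int
double n = n * 2

-- "Predicate": a type class maps types to propositions; in GHC terms,
-- Show has kind  Type -> Constraint.

-- "Connective": conjunction of two predicates, as a constraint tuple.
type ShowEq a = (Show a, Eq a)

describe :: ShowEq a => a -> a -> String
describe x y = if x == y then "equal: " ++ show x else "distinct"
```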
 Namely monotypes: in Haskell, quantified variables range only over
monomorphic types. If we allowed quantified variables to be
instantiated with polymorphic types, then we'd get "impredicative"
quantification. Impredicativity can be both good and bad, depending on
what you're up to.
 Well, they're only the same domain for impredicative quantification.
Otherwise the collection of types you're constructing is "bigger" than
the collection you're quantifying over.
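A concrete place where this bites in Haskell: without the ImpredicativeTypes extension, a type like `[forall a. a -> a]` is rejected, because the list's element type must be a monotype. The standard workaround wraps the polymorphic type in a newtype, giving it a monomorphic name (a sketch; `PolyId` is an invented name):

```haskell
{-# LANGUAGE RankNTypes #-}

-- PolyId is a monotype whose single field is polymorphic, so it can
-- be a list element without impredicative instantiation.
newtype PolyId = PolyId (forall a. a -> a)

polyIds :: [PolyId]
polyIds = [PolyId id, PolyId (\x -> x)]

-- Unwrapping recovers the polymorphic function, usable at any type.
apply :: PolyId -> (Int, Bool)
apply (PolyId f) = (f 0, f True)
```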