[Haskell-cafe] Type-Level Programming
wren ng thornton
wren at freegeek.org
Thu Jul 1 18:05:58 EDT 2010
Andrew Coppin wrote:
> wren ng thornton wrote:
>> Andrew Coppin wrote:
>>> It's a bit like trying to learn Prolog from somebody who thinks that
>>> the difference between first-order and second-order logic is somehow
>>> "common knowledge". (FWIW, I have absolutely no clue what that
>>> difference is.)
>>
>> First-order means you can only quantify over simple things.
>> Second-order means you can also quantify over quantification, as it were.
>
> Sure. I get the idea that "second-order" means "like first-order, but
> you can apply it to itself" (in most contexts, anyway).
Kind of. More accurately, you can apply it to itself once. That is,
continuing the quantification version:
First we have a set S. In the 0th-order world we can only pick out
particular elements of S by name; we can't say "oh, for any old element
of S". In the 1st-order world we are allowed to quantify over elements
of S (i.e., pick out subsets). However, in so doing, we implicitly
create a new set S' which contains the names of all possible 1st-order
quantifications over S. And in the 1st-order setting we have no way of
quantifying over S'; we must pick out elements of S' by name. But in the
2nd-order setting we can use 2nd-order quantification in order to say
"any old element of S'"[1]. Of course, this will implicitly create a set
S'' of 2nd-order quantifications... and so on.
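For a rough Haskell rendering of that jump (just a sketch; the parallel
between logical orders and rank-N polymorphism is informal, and these
little functions are made up purely for illustration): a rank-1 type
quantifies over plain types, while a rank-2 type quantifies over an
argument which is itself quantified.

    {-# LANGUAGE RankNTypes #-}

    -- Rank-1: the forall sits on the outside; we quantify over
    -- ordinary types.
    once :: forall a. [a] -> [a]
    once = reverse

    -- Rank-2: the argument must itself be polymorphic, i.e. we
    -- quantify over a quantified thing. The caller cannot pin 'a'
    -- down to one type before handing 'f' over.
    twice :: (forall a. a -> a) -> (Int, Bool)
    twice f = (f 0, f True)

Giving 'twice' the rank-1 type (a -> a) -> (Int, Bool) instead won't
typecheck, since a single choice of 'a' can't be both Int and Bool;
that failure is one place where the jump between levels becomes
visible.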
In practice, there seems to be something magical about 2nd-order
systems. That is, while there are differences between 2nd-order and
higher-order (i.e., any order >= 2, up to and including the limit),
these differences tend to
be trivial/obvious, whereas the leap from 1st-order systems to 2nd-order
systems causes remarkable and catastrophic[2] changes. So technically
2nd-order systems are restricted to only quantifying over their
1st-order subsystems, but they introduce the notion of self-reference,
which is as different from quantification as quantification is from naming.
> But what the heck does "quantification" mean?
In the simplest sense, it's the ability to enumerate some collection.
Though that only raises more questions: "which collection?", "how big of
a collection can it be?" or equivalently "how powerful of an enumerator
does it use?", "is it guaranteed to yield *every* element of the
collection?",... In an only somewhat more general sense, it's a question
philosophers (and logicians) have argued over for centuries.
But more apropos, the distinction between orders is really a distinction
about the power/nature of a system/theory. Quantification over sets is
an example of a theory where we'd want to distinguish orders, but it
isn't the only one. The categorization of lambda calculi into layers of
terms, types, kinds, etc. is another example (where each layer is an
order); there's a small Haskell sketch of that layering below. The other
example I have, about Theory of Mind, is yet another, and perhaps the
clearest for explaining what the distinction is ultimately trying to get
at. In 1st-order theories we can generate hypotheses about the world,
but in 2nd-/higher-order theories we can also generate hypotheses about
our theories. Again, the 2nd order introduces the notion of
self-reference, the idea of a theory containing itself as an object, and
the infinite regress of impredicativity.[3]
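To make the term/type/kind layering concrete, here is a quick sketch in
ordinary GHC syntax (the names are mine, made up for illustration; each
layer is classified by the one above it):

    -- Term level: 'Just True' is a value, classified by a type.
    example :: Maybe Bool
    example = Just True

    -- Type level: 'Maybe Bool' is a type, classified by a kind.
    -- Kind level: in GHCi,
    --   :kind Maybe Bool    reports    Maybe Bool :: *
    --   :kind Maybe         reports    Maybe :: * -> *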
[1] And we can still use 1st-order quantification in order to enumerate
S. Frequently mathematicians don't bother distinguishing the different
varieties of quantification available in a given system; so long as we
know what the upper limit is, we use the same notation for everything
under that limit. Sometimes, however, we do need to distinguish the
different quantifiers. For example, this shows up in the stratification
of universes used to avoid impredicativity in Coq, Agda, etc.; this is
usually hidden from the programmer (left implicit), though it shows up
in the compiler/type checker.
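A small Haskell-flavored aside on that (only an analogy, and assuming no
kind polymorphism): the type-level forall can mention kinds, but there
is no forall *over* kinds, so in Haskell the hierarchy simply stops;
Coq and Agda instead keep it going with a tower of universes (roughly,
Type_i in Coq, Set_i in Agda), which is where the stratification comes
in.

    {-# LANGUAGE RankNTypes, KindSignatures #-}

    -- The type variable 'a' can carry a kind annotation, but the
    -- forall only ranges over type variables; there is no
    -- corresponding quantifier over kinds themselves here.
    poly :: forall (a :: *). a -> a
    poly x = x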
[2] That is, world-altering. Not necessarily bad.
[3] Though no one can really say what "impredicativity" *is* either ;)
--
Live well,
~wren