on termination

C T McBride c.t.mcbride@durham.ac.uk
Wed, 14 May 2003 10:25:13 +0100 (BST)


> On Tuesday 13 May 2003 16:08, Cagdas Ozgenc wrote:
> > Indeed, I am still looking for a theorem of some sort that either equates
> > strongly normalizing systems to primitive recursive functions (if this is
> > the case), or a theorem that demonstrates some sort of limitation to
> > strongly normalizing systems when compared to a turing machine, or a system
> > that can decide recursive class but unable to recognize recursively
> > enumerable class.

There ain't no such thing. Recursiveness is undecidable. More concretely,
a strongly normalizing system captures a class of terminating functions
which is necessarily incomplete: a simple diagonalization argument shows
that the system's own normalization function (terminating, by definition)
cannot be coded within the system itself.

> Hmm, I think this depends on what a strongly normalizing system is :)
> Slightly going out of my sphere of knowledge (^_^)

A *strongly* normalizing system is one where *all* reduction strategies
yield a normal form. Type theories are usually designed to be strongly
normalizing, hence they are Turing incomplete; they do contain all
programs which are *provably* terminating in the particular logic to which
they correspond, but that logic, being consistent, is necessarily also
incomplete.

(Incidentally, Cayenne's type theory is the exception rather than the
rule: its much-criticized undecidable typechecking comes down to the
availability of non-terminating programs at the type level. ghc with
`undecidable instances' allows you to write non-terminating Prolog
programs at the type level, thus breaking for exactly the same reason.
What exactly is Haskell's type level programming language, anyway?)

Of course, restricting to an incomplete fragment does not necessarily
mean `primitive recursion' in the narrow sense which excludes Ackermann's
function, viz

  data Nat = O | S Nat

  primrec :: Nat -> (Nat -> Nat -> Nat) -> Nat -> Nat
  primrec base step  O    = base
  primrec base step (S n) = step n (primrec base step n)
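Plenty still fits inside that narrow fragment, of course. As a small
illustrative sketch (`plus` and the conversion helpers are my names, not
from the post), here is addition defined with primrec:

```haskell
data Nat = O | S Nat

primrec :: Nat -> (Nat -> Nat -> Nat) -> Nat -> Nat
primrec base step  O    = base
primrec base step (S n) = step n (primrec base step n)

-- plus m recurses structurally on its second argument:
--   plus m O = m,  plus m (S n) = S (plus m n)
plus :: Nat -> Nat -> Nat
plus m = primrec m (\_ r -> S r)

-- conversion helpers, purely for convenience
toInt :: Nat -> Int
toInt O     = 0
toInt (S n) = 1 + toInt n

fromInt :: Int -> Nat
fromInt 0 = O
fromInt k = S (fromInt (k - 1))
```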

Just a bit of polymorphism is enough to overcome that problem:

  polyprimrec :: t -> (Nat -> t -> t) -> Nat -> t
  polyprimrec base step  O    = base
  polyprimrec base step (S n) = step n (polyprimrec base step n)
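To make that concrete: instantiating t at Nat -> Nat gives higher-order
primitive recursion, which is enough to define Ackermann's function. A
sketch (`ack` and the helpers are my names, not from the post):

```haskell
data Nat = O | S Nat

polyprimrec :: t -> (Nat -> t -> t) -> Nat -> t
polyprimrec base step  O    = base
polyprimrec base step (S n) = step n (polyprimrec base step n)

-- Ackermann via polyprimrec at t = Nat -> Nat:
--   ack O     = S                      (ack 0 n = n + 1)
--   ack (S m) = iterate ack m, seeded with ack m (S O)
ack :: Nat -> Nat -> Nat
ack = polyprimrec S (\_ f -> polyprimrec (f (S O)) (\_ r -> f r))

toInt :: Nat -> Int
toInt O     = 0
toInt (S n) = 1 + toInt n

fromInt :: Int -> Nat
fromInt 0 = O
fromInt k = S (fromInt (k - 1))
```

The outer recursion is structural on the first argument, the inner one on
the second, so the whole thing stays within the terminating fragment.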

Old fashioned dependent type theories present computation over inductive
datatypes by equipping their structural induction principles with a
computational behaviour---an inductive proof of all n : Nat . P n, when
applied to 3, reduces to step 2 (step 1 (step 0 base)), just like
polyprimrec, but with

    base :: P 0,
and step :: all n : Nat. P n -> P (S n).
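You can watch that unfolding in plain Haskell by collapsing the dependency,
instantiating t at String (an illustrative sketch, not from the post):

```haskell
data Nat = O | S Nat

polyprimrec :: t -> (Nat -> t -> t) -> Nat -> t
polyprimrec base step  O    = base
polyprimrec base step (S n) = step n (polyprimrec base step n)

toInt :: Nat -> Int
toInt O     = 0
toInt (S n) = 1 + toInt n

-- applied to 3, the recursion unfolds to step 2 (step 1 (step 0 base))
trace3 :: String
trace3 = polyprimrec "base"
           (\n r -> "step " ++ show (toInt n) ++ " (" ++ r ++ ")")
           (S (S (S O)))
-- trace3 == "step 2 (step 1 (step 0 (base)))"
```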

Suitably equipped with `pattern matching' sugar, these induction
principles provide quite a rich language of terminating functions,
necessarily incomplete, but including such notoriously troublesome
customers as first-order unification.

Moreover, a structural induction principle is just the recursion operator
which comes `free of charge' with an inductive datatype: many other total
recursive idioms can be proven admissible once (by a process which
resembles coding up a memoization scheme), then re-used freely thereafter.

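One standard instance of the idiom: Fibonacci, which is not structurally
recursive as usually written, but becomes so once you carry a pair of
neighbouring values. A sketch (`fib` and the helpers are my names, not
from the post):

```haskell
data Nat = O | S Nat

polyprimrec :: t -> (Nat -> t -> t) -> Nat -> t
polyprimrec base step  O    = base
polyprimrec base step (S n) = step n (polyprimrec base step n)

plus :: Nat -> Nat -> Nat
plus m = polyprimrec m (\_ r -> S r)

-- fib by structural recursion, carrying the pair (fib n, fib (n+1));
-- the pair plays the role of a small memo table
fib :: Nat -> Nat
fib = fst . polyprimrec (O, S O) (\_ (a, b) -> (b, plus a b))

toInt :: Nat -> Int
toInt O     = 0
toInt (S n) = 1 + toInt n

fromInt :: Int -> Nat
fromInt 0 = O
fromInt k = S (fromInt (k - 1))
```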
So there's no magic complete answer to guaranteed termination, but it is,
I claim, useful to work within an incomplete but terminating language
if it enables you to express the structural reasons why your program
works, not just what it should do next.

Hope this helps