ANN: Normal-order evaluation as bottom-up parsing

oleg@pobox.com
Sun, 28 Apr 2002 14:11:59 -0700 (PDT)


This message announces a normal-order lambda-calculator integrated
with Haskell. The calculator implements what seems to be an efficient
and elegant algorithm for normal-order reduction. The algorithm is
"more functional" than the traditionally used approach, and is
essentially the one employed by yacc, save for one critical
difference. The calculator also takes a more "functional" approach to
the hygiene of beta-substitutions, achieved by coloring identifiers
only where absolutely necessary. This approach is "more functional"
because it avoids a "global" counter and the threading of the paint
bucket through the whole process. The integration with Haskell lets us
store terms in variables and combine them easily and intuitively. A
few examples (including 'greaterp', a term comparing two Church
numerals in magnitude) show how easy it is to program the calculator.

This message is an indirect reply to a recently posted query about the
treatment of free variables. The lambda-calculator in this message
shows one way of doing that.

The traditional recipe for normal-order reductions includes an
unpleasant phrase "cook until done". The phrase makes it necessary to
keep track of reduction attempts, and implies an ugly iterative
algorithm. We propose what seems to be an efficient and elegant
technique that can be implemented through intuitive rewriting rules.
Our calculator, like yacc, possesses a stack and works by doing a
sequence of 'shift' and 'reduce' steps. We consider an application (a
b) as a _delayed_ normalization. We delay dealing with 'b' and with
the application until we figure out what to do with the term 'a'. The
calculator's stack contains all the terms to be applied to the current
one. The evaluator does a 'reduce' step when it is sure it has a
redex, or when it is sure it does not (e.g., (x (\y. y)) where 'x' is
free). When the evaluator is not sure about the current term, it
'shifts'. The only difference from yacc is that the lambda-calculator
"reparses" the result after the successful reduce step. The source and
the target languages of our "parser" (lambda-calculator) are the same;
therefore, the parser can indeed apply itself.
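
To make this concrete, here is a minimal sketch of such a shift/reduce
normalizer. This is an illustration under assumptions, not the
announced code: the term representation (with VC coloring variable
names) is inferred from the examples below, the pretty-printer behind
'show' is omitted, and so are the eta-reductions the real calculator
apparently performs (cf. the 'mul' examples below). The substitution
it relies on is sketched after the next paragraph.

> data VarName = VC Int String deriving Eq
> data Term = Var VarName | Lam VarName Term | App Term Term
>
> -- The stack holds the delayed operands of the applications
> -- surrounding the current term.
> norm :: Term -> [Term] -> Term
> norm (App a b) stack = norm a (b : stack)      -- shift: delay b
> norm (Lam v body) (b : stack) =
>     norm (subst v b body) stack                -- reduce, then "reparse"
> norm (Lam v body) [] = Lam v (norm body [])    -- normalize under the lambda
> norm v@(Var _) stack =                         -- head is free: no redex;
>     foldl App v (map (\b -> norm b []) stack)  -- normalize the operands

The top-level entry point simply starts with an empty stack:
normalizing a term t is norm t [].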

The evaluator's source code:
    http://pobox.com/~oleg/ftp/Haskell/Lambda_calc.lhs

explains the algorithm in detail. The file has five times more lines
of comments than lines of code; it also expounds on the hygiene of
beta-substitutions, achieved by "local painting".
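
As a companion to the sketch above, here is what substitution with
such local painting might look like; again an assumption-laden
illustration, not the actual code. A bound variable is repainted (its
color bumped) only at the point where the substitution would
otherwise capture it; repainted variables print like the a~1 in the
hygiene example below.

> freeVars :: Term -> [VarName]
> freeVars (Var v)   = [v]
> freeVars (Lam v t) = filter (/= v) (freeVars t)
> freeVars (App a b) = freeVars a ++ freeVars b
>
> -- Bump the color until the variable clashes with nothing in 'taken'.
> paint :: VarName -> [VarName] -> VarName
> paint v@(VC color name) taken
>     | v `elem` taken = paint (VC (color + 1) name) taken
>     | otherwise      = v
>
> -- subst v b t  replaces the free occurrences of v in t with b.
> subst :: VarName -> Term -> Term -> Term
> subst v b t@(Var w) = if w == v then b else t
> subst v b t@(Lam w body)
>     | w == v       = t                         -- v is shadowed here
>     | w `elem` fvb = Lam w' (subst v b (subst w (Var w') body))
>     | otherwise    = Lam w (subst v b body)
>     where fvb = freeVars b
>           w'  = paint w (fvb ++ freeVars body)
> subst v b (App f arg) = App (subst v b f) (subst v b arg)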

The following are a few examples.

First, we make a few variables.

> make_var = Var . VC 0  -- a convenience function
> [a,b,c,x,y,z,f,g,h,p,q] =
>    map make_var ["a","b","c","x","y","z","f","g","h","p","q"]
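
A note on notation: ^ and # are the calculator's infix operators for
abstraction and application, so  x ^ x # x  denotes \x. x x. The
definitions below are a plausible reconstruction from the way the
operators are used in this message (the real ones are in
Lambda_calc.lhs; Prelude's (^) would have to be hidden):

> infixl 9 #
> infixr 8 ^
> (#) :: Term -> Term -> Term
> (#) = App
> (^) :: Term -> Term -> Term    -- partial: the binder must be a variable
> (Var v) ^ body = Lam v body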

If we type at the Hugs prompt:
LC> eval $ (x ^ x # x) # (y ^ y # z)

we will see the answer "z z": the term (\x. x x) (\y. y z) reduces to
(\y. y z) (\y. y z), then to (\y. y z) z, and finally to z z.
   
The substitutions are hygienic, as the following example demonstrates:
LC> eval $ a ^ (x ^ a ^ a # x) # (a # x)
    ==> (\a. (\a~1. a~1 (a x)))

Substituting (a x) under the inner binder 'a' would capture the free
'a', so the calculator repaints that binder to a~1.

A less trivial example (of integration with Haskell):

> c0 = f ^ x ^ x		-- Church numeral 0
> succ = c ^ f ^ x ^ f # (c # f # x)	-- Successor (hide Prelude's succ when loading)

> c1 = eval $ succ # c0   -- pre-evaluate other numerals
> c2 = eval $ succ # c1
> c3 = eval $ succ # c2
> c4 = eval $ succ # c3

It is indeed convenient to store terms in Haskell variables and
pre-evaluate (i.e., normalize) them. They are indeed terms. We can
always ask the interpreter to show the term. For example, "show c4"
yields "(\f. (\x. f (f (f (f x)))))".

> mul = a ^ b ^ f ^ a # (b # f)		        -- multiplication
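
As a quick check, eval $ mul # c2 # c2 should print the same normal
form as c4 above, "(\f. (\x. f (f (f (f x)))))".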

Now, the interesting thing:
   eval $ mul # c1 ---> (\b. b), the identity function
   eval $ mul # c0 ---> (\b. (\f. (\x. x))), which is "const 0"

These are _algebraic_ results: multiplication of any number by zero is
always zero. Another algebraic result, using the zero test zerop
(sketched below):
   eval $ zerop # (succ # a) ---> (\x. (\y. y)), that is, false.
For all a, (succ a) is not zero. We can now see how lambda-calculus
can be useful for theorem proving (even over universally-quantified
formulas).
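
zerop itself is not defined in this message; in the notation above it
is presumably the standard Church test for zero, along the lines of
the following (with true and false being the Church booleans):

> true  = x ^ y ^ x                    -- Church booleans
> false = x ^ y ^ y
> zerop = c ^ c # (x ^ false) # true   -- c (\_ -> false) true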

Finally, as an even less trivial example, here is a term that compares
two Church numerals in magnitude. The example is excerpted from
LC_neg.lhs referenced below; that file provides all the definitions
and a long explanation.
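
greaterp relies on a handful of combinators from LC_neg.lhs; comparing
with the printed normal form below, they appear to be the standard
encodings:

> cons = a ^ b ^ p ^ p # a # b    -- pairing
> car  = p ^ p # true             -- first component of a pair
> cdr  = p ^ p # false            -- second component of a pair
> lif  = p ^ x ^ y ^ p # x # y    -- 'if' over Church booleans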

> greaterp = eval $
>   a ^ b ^
>      -- Checking if anything is left of the list: if the current
>      -- cell is nil
>     (car
>      # (b
>        -- deconstructing the list. Once we hit the nil, we stop
>        # (z ^ lif # (car # z) # (cdr # z) # z)
>        -- Building the list
>        # (a
>          # (z ^ (cons # true # z))
>          # (cons # false # false)) -- Initial cell, the nil
>      ))

This is a bona fide lambda-term, with no cheating. We can ask the
interpreter to print it, to make sure:

LC_neg> greaterp
(\a. (\b. b (\z. z (\x. (\y. x)) (z (\x. (\y. y))) z) (a (\z. (\p. p
(\x. (\y. x)) z)) (\p. p (\x. (\y. y)) (\x. (\y. y)))) (\x. (\y. x)))) 

To check, we can evaluate
  eval $ greaterp # c1 # c0  ---> (\x. (\y. x)), that is, true
  eval $ greaterp # c2 # c3  ---> (\x. (\y. y)), that is, false

Note that using Haskell variables makes programming in lambda calculus
pleasant.

Files
	http://pobox.com/~oleg/ftp/Haskell/LC_basic.lhs
	http://pobox.com/~oleg/ftp/Haskell/LC_neg.lhs
show many more definitions and examples of lambda arithmetic and
algebraic equalities.


To be added shortly is code to compare terms modulo alpha-conversion,
and the systematic treatment of negative numbers and their operations
(including division). The latter merely needs to be ported from a
different notation.