[Haskell-beginners] Algorithmic Advances by Discovering Patterns in Functions

Jay Sulzberger jays at panix.com
Tue Sep 18 23:20:07 CEST 2012



On Sat, 15 Sep 2012, Costello, Roger L. <costello at mitre.org> wrote:

> Hi Folks,

< ... />

> Okay, now we are ready to embark on our journey of discovery. We will examine three functions and find their common pattern.
>
> The functions process values from this recursive data type:
>
> data Nat = Zero | Succ Nat
>
> Here are some examples of Nat values:
>
> Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ...
>
> The following functions perform addition, multiplication, and exponentiation on Nat values. Examine the three functions. Can you discern a pattern?
>
> -- Addition of Nat values:
> m + Zero = m
> m + Succ n = Succ(m + n)
>
> -- Multiplication of Nat values:
> m * Zero = Zero
> m * Succ n = (m * n) + m
>
> -- Exponentiation of Nat values:
> m ** Zero = Succ Zero
> m ** Succ n = (m ** n) * m
>
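> As a concrete aside, here is a minimal runnable transcription of the
> three definitions above.  The names addN, mulN and expN are chosen
> here purely for illustration, because +, * and ** clash with the
> Prelude operators; the Nat declaration is repeated so the snippet
> stands alone:
>
> data Nat = Zero | Succ Nat
>     deriving Show
>
> addN :: Nat -> Nat -> Nat
> addN m Zero     = m
> addN m (Succ n) = Succ (addN m n)
>
> mulN :: Nat -> Nat -> Nat
> mulN m Zero     = Zero
> mulN m (Succ n) = addN (mulN m n) m
>
> expN :: Nat -> Nat -> Nat
> expN m Zero     = Succ Zero
> expN m (Succ n) = mulN (expN m n) m
>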
> For each function the right-hand operand is either Zero or Succ n:
>
> (+m) Zero
> (+m) (Succ n)
>
> (*m) Zero
> (*m) (Succ n)
>
> (**m) Zero
> (**m) (Succ n)
>
> So abstractly there is a function f that takes as argument either Zero or Succ n:
>
> f Zero
> f (Succ n)
>
> Given Zero as the argument, each function immediately returns a value:
>
> (+m)  Zero  	= m
> (*m)  Zero  	= Zero
> (**m) Zero  	= Succ Zero
>
> Let c denote the value.
>
> So, invoke f with a value for c and Zero:
>
> f c Zero = c
>
> For addition provide m as the value for c:
>
> m + Zero = f m Zero
>
> For multiplication provide Zero as the value for c:
>
> m * Zero = f Zero Zero
>
> For exponentiation provide (Succ Zero) as the value for c:
>
> m ** Zero = f (Succ Zero) Zero
>
> Next, consider how each of the functions (addition, multiplication, exponentiation) deals with Succ n.
>
> Given the argument Succ n,  each function processes n and then applies a function to the result:
>
> (+m)  (Succ n)  = Succ ((+m) n)
> (*m)  (Succ n)  = (+m) ((*m) n)
> (**m) (Succ n)  = (*m) ((**m) n)
>
> Let h denote the function that is applied to the result.
>
> So, Succ n is processed by invoking f with a value for h and Succ n:
>
> f h (Succ n) = h (f n)
>
> For multiplication provide (+m) as the value for h:
>
> m * Succ n = f (+m) (Succ n)
>
> Recall that (+m) is a function: it takes one argument and returns the sum of the argument and m.
>
> Thus one of the arguments of function f is a function.
>
> f is said to be a higher order function.
>
> [Definition: A higher order function is a function that takes a function as an argument or returns a function as its result.]
>
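> For a familiar illustration of the idea (not part of the derivation
> above), consider:
>
> twice :: (a -> a) -> a -> a
> twice g x = g (g x)
>
> twice takes the function g as an argument, so it is higher order;
> for example, twice Succ Zero evaluates to Succ (Succ Zero).
>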
> Continuing on with the other values for h:
>
> For addition provide (Succ) as the value for h:
>
> m + Succ n = f (Succ) (Succ n)
>
> For exponentiation provide (*m) as the value for h:
>
> m ** Succ n = f (*m) (Succ n)
>
> Recap: to process Nat values use these equations:
>
> f c Zero = c
> f h (Succ n) = h (f n)
>
> Actually we need to supply f with h, c, and the Nat value:
>
> f h c Zero = c
> f h c (Succ n) = h (f h c n)
>
> f has this type signature: its arguments are a function, a value, and a Nat value; the result is a value. So this is f's type signature:
>
> f :: function -> value -> Nat -> value
>
> That type signature is well-known in functional programming. It is the signature of a fold function. Here is some intuition on why it is called the fold function:
>
> Imagine folding a towel. You make your first fold. The second fold builds on top of the first fold. Each fold builds on top of the previous folds. That is analogous to what the function f does. "You make your first fold" is analogous to immediately returning c:
>
> f h c Zero = c
>
> "Each fold builds on top of the previous folds" is analogous to processing Succ n by applying h to the result of processing the n'th Nat value:
>
> f h c (Succ n) = h (f h c n)
>
> Recap: In analyzing the functions (addition, multiplication, exponentiation) on Nat we have discovered that they are all doing a fold operation.
>
> That is a remarkable discovery.
>
> Let's now see how to perform addition, multiplication, and exponentiation using f. However, since we now recognize that f is actually the fold function, let's rename it foldn (fold Nat). Here is its type signature and function definition:
>
> foldn			:: (a -> a) -> a -> Nat -> a
> foldn h c Zero		=  c
> foldn h c (Succ n)	=  h (foldn h c n)
>
> The functions are implemented as:
>
> m + n 			= foldn Succ m n
> m * n 			= foldn (+m) Zero n
> m ** n 			= foldn (*m) (Succ Zero) n
>
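> In runnable form, again with plain illustrative names standing in
> for +, * and **, and reusing the Nat and foldn definitions above:
>
> addF, mulF, expF :: Nat -> Nat -> Nat
> addF m n = foldn Succ m n
> mulF m n = foldn (`addF` m) Zero n
> expF m n = foldn (`mulF` m) (Succ Zero) n
>
> -- handy for checking results in GHCi, e.g.
> --   toInt (expF (Succ (Succ Zero)) (Succ (Succ (Succ Zero))))  ==  8
> toInt :: Nat -> Int
> toInt = foldn (+ 1) 0
>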
> Let's take an example and trace its execution:
>
> Zero + Succ Zero
> 	= foldn Succ Zero (Succ Zero)
> 	= Succ (foldn Succ Zero Zero)
> 	= Succ (Zero)
>
> The holy trinity of functional programming is:
>
> 1.  User-defined recursive data types.
> 2.  Recursively defined functions over recursive data types.
> 3.  Proof by induction: show that some property P(n) holds for each element of a recursive data type.
>
> Nat is a user-defined recursive data type. The addition, multiplication, and exponentiation functions are recursively defined functions over Nat.
>
> We saw that the fold function can be defined for the Nat data type. In fact, a suitable fold function can be defined for every recursive data type.
>
> Wow!
>
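> For instance, here is the same construction sketched for a
> user-defined list type (the names List and foldL are mine, chosen
> for illustration):
>
> data List a = Nil | Cons a (List a)
>
> foldL :: (a -> b -> b) -> b -> List a -> b
> foldL f c Nil         = c
> foldL f c (Cons x xs) = f x (foldL f c xs)
>
> -- length and sum then fall out as instances of foldL:
> lengthL :: List a -> Int
> lengthL = foldL (\_ acc -> 1 + acc) 0
>
> sumL :: List Int -> Int
> sumL = foldL (+) 0
>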
> There are two advantages of writing recursive definitions in terms of fold:
>
> 1.  The definitions are shorter; rather than having to write down two equations, we have only to write down one.
> 2.  It is possible to prove general properties of fold and use them to prove properties of specific instantiations. In other words, rather than having to write down many inductive proofs, we have only to write down one.
>
> Recap: we have approached programming like scientists; namely, we have examined a set of algorithms (the addition, multiplication, and exponentiation functions), discovered that they share a recurring pattern, and then generalized them into a single higher-order algorithm. That is advancing the field. Incremental advancements such as this are what take Computer Science to new heights.
>
> /Roger

A. V. Borovik has an introduction to mathematical logic, made up
entirely of examples, called "Shadows of the Truth":

   http://www.maths.manchester.ac.uk/~avb/ST.pdf

In the draft 0.819, dated 25 January 2012, which I just now
pulled off the Net, chapter 5, which starts on page 45 (61 in my
pdf reader), has a discussion of how to define addition,
multiplication, and exponentiation, of positive integers,
starting from (a version of) Peano's Axioms.  I looked at the
book a few years ago and I remember that in this section there
appeared the question:

   Addition and multiplication obey the commutative law.
   Addition and multiplication obey the associative law.
   But exponentiation does not.
   Why?

I do not see this question posed explicitly in the chapter (I may
have missed it), but the following fact does appear:

   OK, addition and multiplication can be defined for Z/n (by which
   we mean the integers modulo n) by the "same" definition as is used
   for the positive integers.
   But that definition fails for exponentiation.
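
To make the failure concrete with a small worked example: 2 and 5
are the same element of Z/3, yet 2^2 = 4 is 1 mod 3 while 2^5 = 32
is 2 mod 3, so the naive definition of exponentiation is not
well defined on residues.  And already over the positive integers
2^3 = 8 differs from 3^2 = 9, and (2^2)^3 = 64 from 2^(2^3) = 256,
so exponentiation is neither commutative nor associative.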


In Phil Wadler's video "Church's Coincidences"

   http://www.youtube.com/watch?v=2PJ_DbKGFUA

a note by S. C. Kleene is shown.  The note lays out a definition
of the inverse of the successor function, done for Church's
coding of the integers into (some variant, indeed the original)
lambda calculus.  There was some difficulty in finding the
definition.
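
For readers who want to see the trick in Haskell rather than in
Kleene's notation, here is a minimal sketch of the pair trick that
makes the predecessor definable (the names are mine, and RankNTypes
is used only so that Church numerals get a type):

   {-# LANGUAGE RankNTypes #-}

   newtype Church = Church { runChurch :: forall a. (a -> a) -> a -> a }

   zeroC :: Church
   zeroC = Church (\_ z -> z)

   succC :: Church -> Church
   succC (Church n) = Church (\s z -> s (n s z))

   -- Iterate the step (i, j) |-> (succ i, i) n times starting from
   -- (zero, zero); after n steps the second component is n - 1,
   -- and predC zeroC comes out as zeroC.
   predC :: Church -> Church
   predC (Church n) = snd (n step (zeroC, zeroC))
     where step (i, _) = (succC i, i)

   -- for inspection: churchToInt (predC (succC (succC zeroC))) == 1
   churchToInt :: Church -> Int
   churchToInt (Church n) = n (+ 1) 0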


Yesterday oleg at okmij.org posted to Haskell-cafe a note with
subject "Church vs Boehm-Berarducci encoding of Lists".  It mentions
the puzzle of how to define cdr in that encoding and, more
generally, how to do destructuring-bind^W"pattern matching", in the
Haskell sense, in the general case.
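
To connect it with the previous sketch: the same pair trick gives
cdr for the encoded lists (again the names are mine, the same
RankNTypes pragma is assumed, and this only illustrates the puzzle,
not the construction in oleg's note):

   newtype BList a = BList { runBList :: forall r. (a -> r -> r) -> r -> r }

   nilB :: BList a
   nilB = BList (\_ n -> n)

   consB :: a -> BList a -> BList a
   consB x (BList xs) = BList (\c n -> c x (xs c n))

   -- Fold the list into pairs (list rebuilt so far, that list minus
   -- its most recently consed head); at the end the second component
   -- is the tail.  tailB nilB comes out as nilB.
   tailB :: BList a -> BList a
   tailB (BList xs) = snd (xs step (nilB, nilB))
     where step x (rest, _) = (consB x rest, rest)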


I think these three things

1. the difference between exponentiation and (addition and multiplication)

2. the difficulty of defining subtraction in Church's encoding of
    the non-negative integers

3. the difficulty of defining pattern matching in the
    Church-Boehm-Berarducci encoding

might have something to do with one another.

oo--JS.


