[Haskell-cafe] Detecting Cycles in Datastructures

Greg Woodhouse gregory.woodhouse at sbcglobal.net
Fri Nov 18 15:03:03 EST 2005



--- Paul Hudak <paul.hudak at yale.edu> wrote:

> 
> I suspect from your other post that you haven't seen the standard
> trick of encoding infinite data structures as fixpoints.  Suppose you
> have a lambda calculus term for cons, as well as for the numeral 1.
> Then the infinite list of ones is just:
> 

Not in quite those terms. Intuitively, I think of the infinite data
structure here as a kind of "completion" of the space of ordinary
(finite) graphs to a space in which the given function actually does
have a fixed point.
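
To make that concrete for myself, here is a small Haskell sketch (my
own rendering, not something from your message) of the list of ones as
an explicit fixed point:

    -- A recursive fixed-point operator. Unlike the untyped Y combinator
    -- quoted below, this version leans on Haskell's own recursion.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- The lambda term  Y (\ones. cons 1 ones)  rendered as a Haskell value.
    ones :: [Integer]
    ones = fix (\xs -> 1 : xs)

    -- take 5 ones == [1,1,1,1,1]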

> Y (\ones. cons 1 ones)
> 
> where Y (aka the "paradoxical combinator" or "fixed point
> combinator")
> is defined as:
> 
> \f. (\x. f (x x)) (\x. f (x x))
> 

Now, this I have seen, but frankly it strikes me as a "formal trick".
I've never felt that this definition of Y gave me much insight into
the computation you describe below.
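
Spelling out a couple of normal-order steps myself (so take this as my
own sketch of the unfolding, as you suggest below) does help a little:

    Y g
      = (\x. g (x x)) (\x. g (x x))        -- expand Y, beta-reduce on f
      = g ((\x. g (x x)) (\x. g (x x)))    -- beta-reduce on x
      = g (Y g)                            -- the argument is the unfolding of Y g

so with g = \ones. cons 1 ones this gives

    Y g  =  cons 1 (Y g)  =  cons 1 (cons 1 (Y g))  =  ...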

> To see this, try unfolding the above term a few iterations using
> normal-order reduction.  Note that the term has no recursion in it 
> whatsoever.
> 
> Now, my earlier point above is that:
> 
> Y (\ones. cons 1 ones)
> 
> unwinds to the same term as:
> 
> Y (\ones. cons 1 (cons 1 ones))
> 
> yet the two terms (\ones. cons 1 ones) and
> (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not 
> lambda convertible, either).
> 
>    -Paul

And this is what leaves me scratching my head. If you leave off the
"ones", then you get a sequence of finite lists [1], [1, 1], ... that
seems to approximate the infinite list, but I find it difficult to
imagine how you might pluck a formula like \i. 1 (the ith term) out of
the air.
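
The closest I can come is a sketch like the following (my own, with
made-up names): each finite unfolding approximates the infinite list,
and the ith element stabilizes at 1 as soon as the approximation is
long enough, which is where \i. 1 comes from.

    -- The n-fold unfolding of (1 :) applied to "no information yet".
    approx :: Int -> [Integer]
    approx 0 = undefined             -- bottom: the wholly undefined list
    approx n = 1 : approx (n - 1)

    -- take n (approx n) == replicate n 1

    -- Any approximation with more than i elements already determines
    -- the ith element of the limit, and that element is always 1.
    nth :: Int -> Integer
    nth i = approx (i + 1) !! i      -- == 1 for every i >= 0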



===
Gregory Woodhouse  <gregory.woodhouse at sbcglobal.net>


"Interaction is the mind-body problem of computing."

--Philip Wadler
