[Haskell] Newbie : How come that cyclic recursive lists are efficient ?

Francis Girard francis.girard at free.fr
Mon Jan 24 04:38:35 EST 2005


Hi,

The classical Hamming problem has the following solution in Haskell:

*** BEGIN SNAP
-- hamming.hs

-- Merge two ordered infinite lists, dropping duplicates
merge :: (Ord a) => [a] -> [a] -> [a]
merge (x:xs) (y:ys)
  | x == y    = x : merge xs ys
  | x <  y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

-- Lazily produce the hamming sequence
hamming :: [Integer]
hamming 
  = 1 : merge (map (2*) hamming) (merge (map (3*) hamming) (map (5*) hamming))
*** END SNAP
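
For example, merge interleaves two ordered streams and collapses
duplicates (a quick check in GHCi, with hamming.hs loaded; session
abridged):

*** BEGIN SNAP
ghci> take 5 (merge [2,4..] [3,6..])
[2,3,4,6,8]
*** END SNAP

(6 appears only once, even though both streams contain it.)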

I just love these algorithms that run after their own tail (they make my
brain melt), but I don't know how it is that they are efficient.

Here, hamming refers to itself recursively three times. For this algorithm
to be efficient, the Haskell system somehow has to "remember" the already
generated sequence THROUGH RECURSION (i.e. not only intermediate "local"
results); otherwise it would end up regenerating the beginning of the
sequence over and over again.
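
My guess is that this works because hamming names a single list value (a
"constant applicative form"), so every recursive reference points at the
same cells in memory. A hypothetical contrast, just to illustrate the
idea: give it a dummy argument so it becomes a function, and each
recursive reference may rebuild the list from scratch. (GHC's
full-laziness transformation can float the list out and restore the
sharing, so this is only an illustration, not a benchmark.)

*** BEGIN SNAP
-- Hypothetical variant WITHOUT sharing: the dummy () argument turns the
-- list into a function, so each recursive reference can trigger a full
-- recomputation of the prefix (absent full-laziness optimization).
-- Uses the same merge as above.
hammingNoShare :: () -> [Integer]
hammingNoShare () =
  1 : merge (map (2*) (hammingNoShare ()))
            (merge (map (3*) (hammingNoShare ()))
                   (map (5*) (hammingNoShare ())))
*** END SNAP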

Obviously, Haskell does remember what has already been generated THROUGH
RECURSION, since executing the program in GHCi runs quite smoothly and
responsively.
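
For example, after loading hamming.hs in GHCi:

*** BEGIN SNAP
ghci> take 10 hamming
[1,2,3,4,5,6,8,9,10,12]
*** END SNAP

and asking for elements much deeper into the list still answers promptly.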

That Haskell manages to do that is, for me, "magnifique". But I need to
know (if only a little) how it achieves this, in order to know what I, as
a lambda programmer, can rely on, and how to compute the big-O complexity
of the algorithm.
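
One way I have found to convince myself of the sharing is to make every
multiplication announce itself with Debug.Trace (hammingTraced is my own
name, and it assumes merge from the snippet above is in scope). If the
list is shared, each element is multiplied by 2, 3 and 5 at most once, so
take n hamming should cost O(n) work overall, each cons cell taking a
constant number of comparisons in merge:

*** BEGIN SNAP
import Debug.Trace (trace)

-- Instrumented copy of hamming: each multiplication prints a message
-- the first time its thunk is forced.  With sharing, running
--   take 10 hammingTraced
-- prints each "2*x", "3*x", "5*x" line at most once per element;
-- without sharing the same messages would repeat over and over.
hammingTraced :: [Integer]
hammingTraced =
  1 : merge (map t2 hammingTraced)
            (merge (map t3 hammingTraced) (map t5 hammingTraced))
  where
    t2 x = trace ("2*" ++ show x) (2 * x)
    t3 x = trace ("3*" ++ show x) (3 * x)
    t5 x = trace ("5*" ++ show x) (5 * x)
*** END SNAP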

Thank you,

Francis Girard
FRANCE


