[Haskell-cafe] Troubles understanding memoization in SOE
Peter Verswyvelen
bf3 at telenet.be
Wed Sep 26 16:25:26 EDT 2007
Paul L wrote:
> We recently wrote a paper about the leak problem. The draft is at
> http://www.cs.yale.edu/~hl293/download/leak.pdf. Comments are welcome!
I'm trying to understand the following in this paper:
(A) repeat x = x : repeat x
or, in lambdas:
(B) repeat = λx → x : repeat x
This requires O(n) space. But we can achieve O(1) space by writing instead:
(C) repeat = λx → let xs = x : xs in xs
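To make sure I'm reading (A)/(B) and (C) correctly, here they are as a small compilable module (repeatA and repeatC are just my names for them):

-- (A)/(B): each element that gets demanded allocates a fresh (:) cell
repeatA :: a -> [a]
repeatA x = x : repeatA x

-- (C): a single (:) cell whose tail refers back to that same cell
repeatC :: a -> [a]
repeatC x = let xs = x : xs in xs

main :: IO ()
main = do
  print (take 5 (repeatA 'a'))   -- "aaaaa"
  print (take 5 (repeatC 'a'))   -- "aaaaa" as well, but built from one cell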
Let me see if I understand this correctly. Since I'm an imperative
programmer, I'll try a bit of C++ here.
struct Value { };                 // assuming some common base type for heap values

struct Cell : Value
{
    Value* head;                  // pointer to the element x
    Value* tail;                  // pointer to the rest of the list
};
So in (A) and (B), a Cell c1 is allocated, with c1->head pointing to x and c1->tail pointing to a newly allocated Cell c2, and so on, hence the O(n) space complexity.
In (C), however, a single Cell xs is allocated: xs->head is again a pointer to x, but xs->tail points back to the cell xs itself, creating a circular data structure, hence the O(1) space complexity.
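Or, putting that mental picture back into Haskell with an explicit cell type (Cell, repeatAB and repeatCyc are just names I made up):

data Cell a = Cell { hd :: a, tl :: Cell a }   -- a head and a tail pointer, like the C++ struct

-- (A)/(B): every unfolding allocates a fresh Cell
repeatAB :: a -> Cell a
repeatAB x = Cell x (repeatAB x)

-- (C): one Cell whose tail is that very Cell, i.e. a one-node cycle
repeatCyc :: a -> Cell a
repeatCyc x = let c = Cell x c in c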
Is this more or less correct?
I'm also trying to figure out how the "fixed point combinator" works, that is, fix f = f (fix f), and its effect on space/time complexity. Are there any good tutorials on that? Or is it this one: http://haskell.org/haskellwiki/Recursive_function_theory? It looks a bit
scary at first sight ;-)
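In the meantime, here is my guess at how it connects to the repeat example (fixA and fixC are my names; fixC is, as far as I know, essentially how Data.Function.fix is written):

-- the definition quoted above, with no sharing of the result
fixA :: (a -> a) -> a
fixA f = f (fixA f)

-- the knot-tying version, which shares the result with itself
fixC :: (a -> a) -> a
fixC f = let x = f x in x

repeatViaFixA, repeatViaFixC :: a -> [a]
repeatViaFixA x = fixA (x :)   -- unfolds like (A)/(B): a new cell per element
repeatViaFixC x = fixC (x :)   -- unfolds to let xs = x : xs in xs, i.e. (C)

If that is right, then the space behaviour of fix comes down to exactly the same sharing question as repeat.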
Thanks again,
Peter