[Haskell-cafe] Re: Reduction Sequence of simple Fibonacci sequence implementation

wren ng thornton wren at freegeek.org
Mon Aug 31 02:25:23 EDT 2009


Luke Palmer wrote:
> On Fri, Aug 28, 2009 at 6:04 AM, Bulat Ziganshin wrote:
>> it looks like a cache of values computed since the last GC, because
>> on GC all those intermediate results will be collected. I think it's
>> not very useful outside of the fib example, which does exactly that:
>> reusing recently computed values
> 
> I wouldn't be so quick to call it useless.  This caching idea, when
> combined with HNF instead of WHNF reduction, leads to a strategy which
> is capable of automatically specializing away layers of
> interpretation.  That is to say, it turns all interpreters into
> compilers.
> 
> Now, there are some disadvantages too, some of which have to do with
> memory performance.   But it's not clear that the disadvantages always
> outweigh the advantages; rather I suspect you get a strategy which
> implies a whole different set of trade-offs for engineering efficient
> programs.

Indeed.

The fibs example highlights the biggest place where memoization wins: 
dynamic programming. DP can dramatically reduce the asymptotic 
complexity of all sorts of recursive algorithms. The idea of automatic 
pervasive memoization (done right) is nothing short of teaching the 
compiler to do DP internally.
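As a concrete illustration (a sketch of the standard idiom, not code
from the original thread): the lazily memoized fibs turns the naive
exponential recursion into linear-time DP, because the list itself is
the memo table.

```haskell
-- The list caches every value as it is forced; each element is
-- computed once and shared, so fib runs in linear time rather than
-- the exponential time of the naive two-branch recursion.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n

main :: IO ()
main = print (fib 50)  -- prints 12586269025
```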

The analogy between laziness and memoization is so apt because laziness 
à la GHC is a restricted form of memoization. Both can improve 
asymptotic complexity over naive code, or can introduce undesired 
bloating. Both can improve clarity by saying what you mean, or can lead 
to inscrutable program performance depending on the whims of the 
Sufficiently Smart Compiler and the programmer's knowledge of its guts. 
Many of the laziness quirks about how to order traversals show up in 
spades with DP. (In fact, converting an algorithm into a DP version 
often amounts to 'just' reordering the traversals.)
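One hedged sketch of that point (my example, not from the thread): a
lazy Data.Array as a memo table lets laziness pick the traversal order
for you. Each cell is a thunk, and forcing one cell forces exactly the
subproblems it depends on, in whatever order the dependencies dictate.

```haskell
import Data.Array

-- Binomial coefficients by DP over a lazy 2-D array. No explicit
-- fill order is written down: demanding table ! (n, k) forces only
-- the cells it transitively depends on, so laziness does the
-- traversal reordering that a strict DP would have to spell out.
choose :: Int -> Int -> Integer
choose n k = table ! (n, k)
  where
    table = listArray ((0, 0), (n, n))
              [ go i j | i <- [0 .. n], j <- [0 .. n] ]
    go _ 0 = 1
    go i j
      | j > i     = 0
      | i == j    = 1
      | otherwise = table ! (i - 1, j - 1) + table ! (i - 1, j)

main :: IO ()
main = print (choose 10 5)  -- prints 252
```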

In Dyna we were working on automatic pervasive memoization (APM) from 
the perspective of Prolog and inductive databases. That approach works 
particularly well for a lot of natural language processing tasks, but 
it's biting off a lot (and inductive databases went the way of 
artificial intelligence because of that complexity). I'd be interested 
in seeing a language like Luke describes, which considers APM more from 
the direction of laziness and partial compilation. Putting the two 
together would make for a whole new era in programming (or at least an 
awesome language for academe).

-- 
Live well,
~wren

