[Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?
benjamin.redelings at duke.edu
Mon Oct 10 18:16:19 CEST 2011
On 10/08/2011 12:46 PM, Jan-Willem Maessen wrote:
> On Fri, Oct 7, 2011 at 2:46 PM, Brandon Moore<brandon_m_moore at yahoo.com> wrote:
>>> Magnus Carlsson did something monadic several years ago.
>>> Perhaps there is an implementation on Hackage or on his website.
>>> This stuff also goes by the moniker "adaptive computation". See the
>>> references and citations of that paper for more on this.
>> Umut Acar now seems to refer to this as "self-adjusting computation",
>> and has some work here:
>> In particular, there seems to be a modified version of Mlton.
> To tie things together a bit, Magnus Carlsson's paper was based on
> Umut Acar's earlier work. Note in particular that there's a lot of
> emphasis placed on efficiently figuring out what computation needs to
> be re-done (and some theory to support those efficiency claims). FRP
> frameworks, etc. naively re-do rather too much computation (all of it,
> in particularly poor cases) compared to systems specifically tailored
> to self-adjustment.
> -Jan-Willem Maessen
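To make the contrast concrete, here is a minimal (and entirely hypothetical) sketch of the kind of bookkeeping such systems do: each input cell records which recomputation actions depend on it, so a write re-runs only those, rather than the whole program. The `Mod`/`dependOn`/`writeMod` names are my own invention, not from any of the cited libraries.

```haskell
import Data.IORef

-- A hypothetical minimal "modifiable" cell: a value plus the
-- recomputation actions that read it.
data Mod a = Mod { modVal :: IORef a, modDeps :: IORef [IO ()] }

newMod :: a -> IO (Mod a)
newMod x = Mod <$> newIORef x <*> newIORef []

-- Register an action as a dependent of this cell.
dependOn :: Mod a -> IO () -> IO ()
dependOn m act = modifyIORef (modDeps m) (act :)

-- Writing an input re-runs only the registered dependents.
writeMod :: Mod a -> a -> IO ()
writeMod m x = do
  writeIORef (modVal m) x
  readIORef (modDeps m) >>= sequence_

main :: IO ()
main = do
  a   <- newMod (1 :: Int)
  b   <- newMod 2
  out <- newIORef 0
  let recompute = do
        x <- readIORef (modVal a)
        y <- readIORef (modVal b)
        writeIORef out (x + y)
  dependOn a recompute
  dependOn b recompute
  recompute                 -- initial run: out = 3
  writeMod a 10             -- change one input; only 'recompute' re-runs
  print =<< readIORef out   -- prints 12
```

Real self-adjusting frameworks do much more (timestamps, memo tables, change propagation order), which is where the efficiency theory comes in.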
1. Thank you! This is the kind of thing I was looking for. It
(a) uses compiler/interpreter infrastructure (not explicit programmer
annotations), and (b) constructs a dynamic dependency graph that
reflects the structure of the computation.
I am curious if anyone has constructed a modified STG machine (which
doesn't modify running code, I guess?) or, alternatively, a graph-based
machine like Sestoft's mark 1 machine (which actually modifies running
code) that would track dependencies. That would be call-by-need instead
of call-by-value.
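As a sketch of what "tracking dependencies in a call-by-need machine" might look like at the smallest possible scale: below, a heap cell is either an unevaluated thunk or a memoized value, forcing updates the cell in place (as in Sestoft's mark 1 machine), and each force is logged so dependency edges could be recorded. The `Cell`/`force` design here is my own illustration, not an existing machine.

```haskell
import Data.IORef

-- A heap cell is either an unevaluated thunk (an action producing the
-- value) or an already-memoized value: call-by-need with update-in-place.
type Cell a = IORef (Either (IO a) a)

mkThunk :: IO a -> IO (Cell a)
mkThunk = newIORef . Left

-- Forcing a cell logs its name (so a dependency edge could be built
-- from the trace) and overwrites the thunk with its value.
force :: IORef [String] -> String -> Cell a -> IO a
force trace name cell = do
  modifyIORef trace (name :)
  st <- readIORef cell
  case st of
    Right v  -> return v
    Left act -> do
      v <- act
      writeIORef cell (Right v)   -- update: thunk replaced by value
      return v

main :: IO ()
main = do
  trace <- newIORef []
  x <- mkThunk (return (2 :: Int))
  y <- mkThunk $ do
         xv <- force trace "x" x      -- y depends on x
         return (xv + 3)
  v1 <- force trace "y" y             -- evaluates x, then y
  v2 <- force trace "y" y             -- memoized: x is not re-forced
  print (v1, v2)                      -- (5,5)
  readIORef trace >>= print . reverse -- ["y","x","y"]
```

The interesting (and hard) part a real implementation would add is invalidation: when a tracked cell changes, walk the recorded edges and reset exactly the affected memoized cells back to thunks.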
(Just for clarification, I am interested in calculations where the
individual operations (e.g. analogous to '+') are extremely slow and are
currently written in C++. Therefore, a certain amount of bookkeeping
overhead seems acceptable.)
2. Another question would be: most of the "self-adjusting computation"
frameworks seem to assume that we always throw away the old
computation. For function optimization (or, alternatively, Markov chain
Monte Carlo random walks) we keep either the old or the new computation
based on the result of the new computation. Thus, we would not like to
simply overwrite the old computation when we do the new computation.
This could mean splitting (part of) the dynamic dependency graph, which
might incur a lot of memory allocation overhead...
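One way to dodge the overwrite problem, at least at the level of the program state (if not the dependency graph itself), is to keep the state in a persistent structure, so a proposal is a new version sharing almost everything with the old one, and accept/reject just picks which version to keep. A tiny sketch, with a stand-in `score` function in place of the real (expensive) model evaluation; all names here are hypothetical:

```haskell
import qualified Data.Map.Strict as M

-- Program state as a persistent map: "writing" builds a new version
-- that shares all unchanged entries with the old one.
type State = M.Map String Double

-- Hypothetical stand-in for the expensive model evaluation.
score :: State -> Double
score = negate . sum . M.elems

-- Keep whichever of old/new scores better; the loser is simply
-- dropped -- nothing was ever destructively overwritten.
step :: State -> State -> State
step old new
  | score new >= score old = new   -- accept the proposal
  | otherwise              = old   -- reject: old version still intact

main :: IO ()
main = do
  let old      = M.fromList [("x", 1), ("y", 2)]
      proposal = M.insert "x" 0 old   -- old is untouched
      kept     = step old proposal
  print (M.lookup "x" kept)           -- Just 0.0 (proposal accepted)
  print (M.lookup "x" old)            -- Just 1.0 (old version intact)
```

Doing the same for memoized intermediate results is exactly the graph-splitting question above: the two versions would need to share the unaffected parts of the dependency graph, and how cheaply that can be done is unclear to me.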