[Haskell] SPJ book - big idea behind supercombinator reduction restriction
paul boston
paul_bostonjp at hotmail.co.uk
Sat Oct 8 13:36:49 EDT 2005
Simon Peyton Jones's book "The Implementation of Functional Programming
Languages", chapter 13 (Supercombinators and Lamba-lifting), page 222,
describes a "multi-argument" beta-reduction strategy whereby several free
variables of a multi-argument abstraction may be instantiated
simultaneously.
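
To check my understanding, here is a rough sketch of what I take
"instantiated simultaneously" to mean (my own toy code, not taken from the
book; the Expr type and the instantiate function are just names I have
invented):

data Expr
  = Var String
  | Num Int
  | Ap Expr Expr
  deriving Show

-- Substitute for several variables in one pass over the body,
-- rather than performing one beta-reduction per argument.
instantiate :: [(String, Expr)] -> Expr -> Expr
instantiate env (Var v)  = maybe (Var v) id (lookup v env)
instantiate _   (Num n)  = Num n
instantiate env (Ap f a) = Ap (instantiate env f) (instantiate env a)

-- Lambda-lifting (\z -> x + y + z) turns the free variables x and y
-- into extra parameters of a supercombinator:
--   $F x y z = + x (+ y z)
-- so reducing ($F 1 2 3) instantiates x, y and z in a single step:
exampleReduction :: Expr
exampleReduction =
  instantiate [("x", Num 1), ("y", Num 2), ("z", Num 3)] bodyF
  where
    bodyF = Ap (Ap (Var "+") (Var "x"))
               (Ap (Ap (Var "+") (Var "y")) (Var "z"))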
Then, on page 225 the following sentence appears:
"A crucial point in the definition of a supercombinator given above is that
a supercombinator reduction only takes place when all the arguments are
present."
I feel that there's some big idea in the pages between these two references,
but I can't quite grasp it.
I can appreciate that there are good reasons (e.g. efficiency) to
instantiate several variables at once, but I don't quite understand why this
restriction is "crucial".
Is it crucial just for reasons of efficiency, so that in actual fact it is
entirely possible, albeit inefficient, to instantiate one variable at a
time? Or, given that the whole point of supercombinators is to allow
compilation to fixed code sequences for the G-machine, is there a good
reason to avoid having those fixed code sequences instantiate one free
variable at a time (simplicity of design, or some restriction on the way
that G-machine operations work)? Or is there a deeper theoretical reason,
related to the combinators themselves, that has nothing to do with the
downstream compilation process?
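
To make the contrast concrete, here is another rough sketch (again my own
invention, continuing the Expr type above) of the "all arguments present"
check I imagine the evaluator performing; spineArgs and readyToReduce are
names I have made up:

-- Collect the arguments sitting on the spine of an application chain.
spineArgs :: Expr -> [Expr]
spineArgs (Ap f a) = spineArgs f ++ [a]
spineArgs _        = []

-- A supercombinator reduction is only attempted once the spine holds
-- at least as many arguments as the supercombinator has parameters;
-- a partial application like ($F 1 2), with $F of arity 3, is left
-- untouched because it is already in weak head normal form.
readyToReduce :: Int -> Expr -> Bool
readyToReduce arity expr = length (spineArgs expr) >= arity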