The Revenge of Finalizers

Alastair Reid alastair at
Sat Oct 19 07:42:39 EDT 2002

> Do we /want/ a blackhole?  If so, then I can fix nhc98.  If not,
> then I can't immediately see a good solution.

A blackhole would be wrong semantically because:

- blackholes mean that a value depends on itself - that isn't true here

- changing evaluation order shouldn't change whether the program
  produces an answer or what answer it produces (as would happen
  here if you deferred the finalizer, or ran the finalizer as soon
  as the object became garbage, i.e., before the main thread had
  forced the value)

- decreasing sharing shouldn't change whether the program produces an
  answer or what answer it produces but it would change the result in
  this program.
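For concreteness, a blackhole is the right behaviour precisely when a
value really does depend on itself; a minimal illustration (my own
example, relying on GHC's runtime reporting re-entry of a thunk under
evaluation as an exception):

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- x genuinely depends on itself, so forcing it re-enters its own
  -- thunk; GHC's blackholing machinery turns that into an exception
  -- (<<loop>>) rather than silently diverging.
  r <- try (evaluate (let x = x + 1 :: Integer in x))
         :: IO (Either SomeException Integer)
  case r of
    Left _  -> putStrLn "blackhole: value depends on itself"
    Right v -> print v
```

The finalizer case is different: the value doesn't depend on itself,
so blackholing it would misreport a perfectly good program as a loop.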

Btw, what abstract machine is NHC based on?  Given its Swedish
origins, I expect it's a slightly modified G-machine.

In my paper about the stack->heap trick, I claimed that it was a fix
for a problem in the STG machine which was
not present in pure graph-reduction implementations.  That is, I
claimed that pure graph-reduction implementations don't need
blackholing and can be interrupted/context-switched at the end of any
reduction step.  Not only is this not true of the STG machine, but it
isn't true of G-machine implementations either (I think blackholing
was first described for LML).  The problem is that the G-machine
optimizes away some of the updates which keep the heap in a
consistent state in a pure graph-reduction system.
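The inconsistency is easy to see in a toy model (everything below is
my own sketch, not code from NHC, Hugs, or any real G-machine): a pure
reducer blackholes a node on entry and updates it on exit, so the heap
is consistent at every step boundary; skipping the update for a node
the compiler believes is unshared is exactly what removes that
guarantee.

```haskell
import qualified Data.Map.Strict as M

-- Toy heap of graph nodes, addressed by Int.
data Node = Thunk Int Int   -- a thunk that computes a + b when forced
          | Val Int
          | BlackHole
          deriving (Show, Eq)

type Heap = M.Map Int Node

-- Pure graph reduction: blackhole on entry, update on exit.  A context
-- switch (e.g. to a finalizer) is safe at any step boundary, and
-- re-entry of a node under evaluation is detected.
forceUpdating :: Heap -> Int -> (Heap, Int)
forceUpdating h a = case h M.! a of
  Val n     -> (h, n)
  BlackHole -> error "<<loop>>"
  Thunk x y -> let n = x + y
               in (M.insert a (Val n) (M.insert a BlackHole h), n)

-- G-machine-style optimisation: the update is skipped for a node
-- believed unshared, so the heap still holds the stale thunk.
forceNoUpdate :: Heap -> Int -> (Heap, Int)
forceNoUpdate h a = case h M.! a of
  Val n     -> (h, n)
  BlackHole -> error "<<loop>>"
  Thunk x y -> (h, x + y)       -- no blackhole, no update

main :: IO ()
main = do
  let h0 = M.fromList [(0, Thunk 20 22)]
  print (snd (forceUpdating h0 0))        -- the value itself
  print (fst (forceUpdating h0 0) M.! 0)  -- root overwritten with Val 42
  print (fst (forceNoUpdate h0 0) M.! 0)  -- root still the old thunk
```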

This leaves me wondering whether there is some way to fix G-machine
implementations so that it is always possible to abort a thread (i.e.,
a finalizer) and restart it later in just the same way that reverting
black holes lets us do that with the STG machine.  (There's then a
separate question as to whether it is feasible to apply this to Hugs
and NHC without rewriting their evaluators from scratch.  I'm not
overly optimistic about this but who knows?)
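One way such a fix might look, sketched in the same toy-heap style
(hypothetical names and representation, not a real implementation):
make the black hole remember the thunk it replaced, so that aborting
the thread can restore every node it had claimed and the evaluation
can be restarted later.

```haskell
import qualified Data.Map.Strict as M

data Node = Thunk Int Int        -- toy thunk computing a + b
          | Val Int
          | RevBH Node           -- black hole that remembers its thunk
          deriving (Show, Eq)

type Heap = M.Map Int Node

-- Claim a node for evaluation, keeping the original payload around.
enter :: Int -> Heap -> Heap
enter = M.adjust RevBH

-- Abort the thread: restore every node it had claimed.
revertAll :: Heap -> Heap
revertAll = M.map unBH
  where unBH (RevBH t) = t
        unBH n         = n

main :: IO ()
main = do
  let h0 = M.fromList [(0, Thunk 20 22)]
      h1 = enter 0 h0          -- finalizer starts evaluating node 0
      h2 = revertAll h1        -- finalizer is aborted; heap restored
  print (h2 == h0)             -- heap identical, so restart is possible
```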


More information about the FFI mailing list