Updates to FFI spec
alastair at reid-consulting-uk.ltd.uk
Mon Aug 12 06:33:41 EDT 2002
> System.Mem.performGC does a major GC. When would a partial GC be useful?
I've described the image-processing example a bunch of times.
We have an external resource (e.g., memory used to store images) which
is somewhat abundant and cheap but not completely free (e.g.,
eventually you start to swap). It is used up at a different rate than
the Haskell heap so Haskell GCs don't occur at the right times to keep
the cost low and we want to trigger GCs ourselves. (In the image
processing example, images were megabytes and an expression like (x +
(y * mask)) would generate 2 intermediate images (several megabytes)
while doing just 2 reductions in Haskell.)
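To make the cost asymmetry concrete, here is a minimal sketch (a hypothetical API, not the original Hugs image library) of an image type that is tiny on the Haskell heap but wraps megabytes of foreign memory. Evaluating (x + (y * mask)) performs just two Haskell reductions yet allocates two large external intermediates, which a finalizer frees only after a Haskell GC notices the wrappers are dead:

```haskell
import Foreign.ForeignPtr (ForeignPtr, mallocForeignPtrBytes)
import Data.Word (Word8)

-- A few words of Haskell heap wrapping megabytes of foreign memory.
newtype Image = Image (ForeignPtr Word8)

imageBytes :: Int
imageBytes = 4 * 1024 * 1024   -- megabytes per image (illustrative)

-- Hypothetical binary operation: allocates a fresh external image for
-- the result.  mallocForeignPtrBytes attaches a free finalizer, so the
-- foreign memory is reclaimed only after a Haskell GC runs.
binOp :: (Word8 -> Word8 -> Word8) -> Image -> Image -> IO Image
binOp _f (Image _a) (Image _b) = do
  out <- mallocForeignPtrBytes imageBytes
  -- (real code would run a C kernel over the pixel buffers here)
  return (Image out)

-- Two reductions, two multi-megabyte intermediates.
example :: Image -> Image -> Image -> IO Image
example x y mask = do
  t <- binOp (*) y mask   -- intermediate 1
  binOp (+) x t           -- intermediate 2; t is now garbage but unfreed
```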
How often and how hard should we GC? We can't do a full GC too often,
or we'll spend a lot of time GCing, destroy our cache and cause
premature promotion of Haskell objects into the old generation which
will make the GC behave poorly. So if all we can do is a full GC,
we'll GC rarely and use a lot of the external resource.
Suppose we could collect just the allocation arena. That would be
much less expensive (time taken, effect on caches, confusion of object
ages) but not always effective. It would start out cheap and
effective but more and more objects would slip into older generations
and have to wait for a full GC.
To achieve any desired tradeoff between GC cost and excess resource
usage, we want a number of levels of GC: gc1, gc2, gc3, gc4, ... Each
one more effective than the last and each one more expensive than the
last. We'll use gc1 most often, gc2 less often, gc3 occasionally, gc4
rarely.
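The tiered schedule can be written down as a pure policy: on the i-th trigger, run the deepest level k such that base^k divides i, so level 1 runs most often, level 2 less often, and so on. The driver below is a hedged sketch: GHC exposes only System.Mem.performGC today, so performGCLevel is a hypothetical primitive standing in for the gc1..gc4 family:

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef')
import System.Mem (performGC)

-- Exponential ladder: level k is chosen on every (base^(k-1))-th
-- trigger, capped at 4 levels.
levelFor :: Int -> Int -> Int
levelFor base = go 1
  where
    go k n
      | k < maxLevel && n `mod` base == 0 = go (k + 1) (n `div` base)
      | otherwise                         = k
    maxLevel = 4

-- Hypothetical per-level collector; today only a full GC exists, so
-- this stand-in always does the same thing.
performGCLevel :: Int -> IO ()
performGCLevel _ = performGC

-- Call after each large foreign allocation.
maybeCollect :: IORef Int -> IO ()
maybeCollect counter = do
  i <- atomicModifyIORef' counter (\n -> (n + 1, n + 1))
  performGCLevel (levelFor 4 i)
```

With base 4 this runs level 1 on three out of every four triggers, level 2 on most multiples of 4, level 3 on multiples of 16, and level 4 on multiples of 64.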
It seemed important to do this 4-5 years ago when we were doing image
processing with Hugs. Now, with memories much, much larger (cf. Hal
Daume's 'why does my program segfault with a 4Gb heap' report) but not
appreciably faster, it is even more important.
> Isn't the property you really want "a full garbage collection has
> been performed and all finalizers have been run to completion"?
With Hugs, we used feedback between the rate of deallocation/
allocation and the rate of GCing. The feedback was easy because the
GC didn't complete until all objects were freed.
With GHC, it would be harder because of the delay between GC and
release. What I'd do is wander round to the university library, find
the section on control theory, find a good book on feedback loops and
implement something from there.
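Short of a control-theory text, the simplest feedback rule can be sketched as pure state updates (an assumption of mine, not Hugs's or GHC's actual mechanism): count external bytes allocated since the last collection, trigger when they exceed a budget, and grow or shrink the budget according to how much the previous collection actually freed. With GHC's delayed finalizers the "freed" signal lags, which is exactly why the real thing wants a proper feedback-loop design:

```haskell
data GcState = GcState
  { budget      :: Int  -- external bytes allowed per collection cycle
  , outstanding :: Int  -- external bytes allocated since last collection
  }

-- Record an allocation; report whether a collection should be triggered.
noteAlloc :: Int -> GcState -> (Bool, GcState)
noteAlloc n s =
  let s' = s { outstanding = outstanding s + n }
  in (outstanding s' >= budget s', s')

-- Feed back the bytes actually freed: if the GC freed most of what was
-- outstanding, a larger budget (fewer GCs) is affordable; if little was
-- freed, shrink the budget and try again sooner.
noteCollected :: Int -> GcState -> GcState
noteCollected freed s
  | freed * 2 >= outstanding s =
      s { budget = budget s * 2, outstanding = outstanding s - freed }
  | otherwise =
      s { budget = max 1 (budget s `div` 2)
        , outstanding = outstanding s - freed }
```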
> I think the spec should be clarified along these lines:
> Header files have no impact on the semantics of a foreign call,
> and whether an implementation uses the header file or not is
> implementation-defined. Some implementations may require a header
> file which supplies a correct prototype for the function in order to
> generate correct code.
I still don't like the fact that compilers are free to ignore header
files. Labelling it an error instead of a change in semantics doesn't
affect the fact that portability is compromised.
> I'd be equally happy (perhaps happier) if the header file spec was
> removed altogether. In a sense, this would leave the Haskell part
> of a foreign binding even more portable, because it doesn't have to
> specify the names of header files which might change between
> platforms. There are already "external" parts of a binding such as
> the names of link libraries, I think I'm arguing that header files
> also fall into that category.
Making header files completely external seems to encourage omission of
header files and non-portability.
And, annoying as it is to limit ourselves to one header file, it has
the nice effect that there is a file out there that Hugs can use - I
don't have to go figure out how the GHC Makefiles and configure system
contrive to pass the right include files on the command line: it's
right there in the file for Hugs to see. (And vice-versa, of course.)
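For the record, this is the spec convention being discussed: the header is named right in the import declaration, so any implementation can see which file supplies the prototype without consulting an external build system. A standard example:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- The impent string names the header and the C function; an
-- implementation may #include "math.h" to get sin's prototype.
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double
```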
> Perhaps on GHC you should be required to "register" the top module
> in your program first, maybe something like
> that way you can register multiple modules (which isn't possible at
> the moment, you have to have another module which imports all the
> others).
What does that do? Is it for threading, GC, profiling, ...?
Do I have to unregister the module later?