[Haskell-cafe] Copy on read
Stefan Holdermans
stefan at cs.uu.nl
Wed May 14 17:43:56 EDT 2008
Dan,
Let me first apologize for this late reply.
Neil pointed you to
>> Jurriaan Hage and Stefan Holdermans. Heap recycling for lazy
>> languages. In John Hatcliff, Robert Glück, and Oege de Moor, editors,
>> _Proceedings of the 2008 ACM SIGPLAN Symposium on Partial Evaluation
>> and Semantics-Based Program Manipulation_, PEPM'08, San Francisco,
>> California, USA, January 7--8, 2008, pages 189--197. ACM Press, 2008.
>> http://doi.acm.org/10.1145/1328408.1328436.
and you commented:
> If I'm reading it aright, it works by tagging the types of values as
> unique or not but keeps that annotation secret from the programmer.
> The typing rules are incredibly complicated so to make things less
> scary, they are also hidden from the user. As a result, this idea
> makes it impossible for a developer to reason about whether their code
> will compile. That doesn't seem like a viable solution. Have I read
> this correctly?
Well, let me first point out that the paper by no means describes
a complete solution. When we wrote it, we were mainly interested in
the static semantics of pure nonstrict functional languages featuring
a means to perform in-place updates explicitly. We use uniqueness
types to ensure that such updates are safe, i.e., do not violate
referential transparency, so that we can keep on enjoying the benefits
of purity---equational reasoning being the most obvious example of
such benefits.
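For intuition, here is a sketch in today's Haskell of the operational
effect we are after. This is emphatically not the notation of the
paper: the 'Cell' type below merely plays the role of a heap node,
and the IORef plumbing mimics what the compiled code would do. In
the system of the paper none of this machinery would be visible to
the programmer; uniqueness types would guarantee that the
overwriting is unobservable.

  import Data.IORef

  -- A mutable cons cell, standing in for a heap node.
  data Cell a = Nil | Cons (IORef a) (IORef (Cell a))

  -- Overwrite each element in place: the operational effect an
  -- in-place map on a unique list would have. No fresh cons cells
  -- are allocated; the existing spine is recycled.
  mapInPlace :: (a -> a) -> Cell a -> IO ()
  mapInPlace _ Nil          = return ()
  mapInPlace f (Cons hd tl) = do
    modifyIORef hd f
    mapInPlace f =<< readIORef tl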
Our choice to denote in-place updates explicitly in the source
program is a very conscious one. Of course, having opportunities
for in-place updates inferred by the compiler is highly desirable, but
there's always a chance that the compiler does not find all the
locations in the source program in which an update can be inserted. A
programmer, having specific knowledge of the behaviour of the program
and the opportunities for in-place updates that arise, may then expect
updates to be inserted only to find out, when profiling the memory
behaviour of the program, that certain updates were not inferred.
What to do then? Restructuring the program, hoping that the
optimization can be found if some pieces of code are altered, seems
particularly undesirable; we would then rather insert the updates by
hand. Thus, explicit updates, at the very least, can be a valuable
addition to a system that tries to infer updates automatically.
As of now, the material in the paper is merely an exercise in static
semantics: we have not implemented the system in a full-scale
compiler. But doing so ranks high on our wish list. Maybe it would
make an interesting student project. By all means, we would like to
get our hands on a mature set of benchmark results.
As mentioned in the paper, the viability of a system of explicit
updates may well depend on the particularities of the underlying
memory model. This may further complicate the semantics and, worse,
make the semantics backend-dependent. Note that backend considerations
also play a role in a system in which all updates are implicit, i.e.,
inferred. The Clean language, for example, features uniqueness typing
as a means to deal with side-effects in general and the Clean compiler
uses uniqueness information to insert in-place updates of memory
cells. Opportunities for in-place updates, however, do not solely
depend on the uniqueness of values but, indeed, also on the details of
the backend involved, as illustrated by a recent thread on the Clean
Discussions mailing list:
http://mailman.science.ru.nl/pipermail/clean-list/2008/003794.html
(in particular: http://mailman.science.ru.nl/pipermail/clean-list/2008/003795.html).
With regard to your concerns: in a system with explicit updates
there's always a chance that a programmer uses updates in an unsafe
fashion, i.e., in such a way that referential transparency can no
longer be guaranteed. Simply issuing a type-error message that
involves richly annotated types may be a bit harsh then, as these
types tend to grow rather complex. We do not so much suggest hiding
these types from the programmer altogether, but we think it's at the
very
least useful to come up with an error message that does a good job
explaining what went wrong. (What comes to mind almost immediately is
the PhD work of Bastiaan Heeren: http://people.cs.uu.nl/bastiaan/phdthesis/.)
An interesting point is that there are always at least two program
points that are "to blame" here: the location of the actual explicit
update and the point (or points) at which the flow of the value to
update may fork.
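To make those two blame points concrete, here is the unsafe pattern
expressed against the Cell sketch above. Again, this is merely
illustrative: the system of the paper would reject such a program
statically rather than let it run.

  -- Builds on the Cell/mapInPlace sketch given earlier.
  unsafeExample :: Cell Int -> IO Int
  unsafeExample xs = do
    let ys = xs          -- fork point: from here on, xs is shared
    mapInPlace (+ 1) xs  -- update site: overwrites cells ys also sees
    sumCells ys          -- observes the mutation; referential
                         -- transparency is lost

  sumCells :: Cell Int -> IO Int
  sumCells Nil          = return 0
  sumCells (Cons hd tl) = do
    x    <- readIORef hd
    rest <- readIORef tl
    fmap (x +) (sumCells rest)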
So, in short: as far as we are concerned, there's no definite story on
this subject yet. For those attending Fun in the Afternoon tomorrow at
the University of Hertfordshire: Jur will give a talk about this
material, roughly based on the stuff we presented at PEPM in January.
Then, Malcolm remarked:
> Uniqueness typing does not lead to in-place update. If a value is
> only used once, then there is no need to update it at all! In fact,
> I believe ghc does this analysis, and has a minor optimisation that
> omits the thunk-update. That is, if an unevaluated thunk is forced,
> but it has a unique usage, the original thunk is not overwritten
> with an indirection to the reduced value (as it normally would be),
> but the reduced value is just used directly.
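Concretely, the situation Malcolm describes is the following (a small
example; whether GHC actually omits the update for any particular
thunk is of course an implementation detail):

  -- The thunk update only pays off when a thunk can be demanded
  -- more than once.
  once, twice :: Integer
  once  = let t = product [1 .. 20] in t + 1  -- t is forced exactly
                                              -- once; overwriting the
                                              -- thunk with its value
                                              -- buys nothing
  twice = let t = product [1 .. 20] in t + t  -- t is demanded twice;
                                              -- the update avoids
                                              -- recomputing the product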
The analysis he's referring to is not really uniqueness analysis, but
what we would like to call sharing analysis. We've commented on how
these analyses relate in a paper (co-written with Arie Middelkoop)
that we presented at last year's ICFP:
    Jurriaan Hage, Stefan Holdermans, and Arie Middelkoop. A generic
    usage analysis with subeffect qualifiers. In Ralf Hinze and
    Norman Ramsey, editors, _Proceedings of the 12th ACM SIGPLAN
    International Conference on Functional Programming_, ICFP 2007,
    Freiburg, Germany, October 1--3, 2007, pages 235--246. ACM
    Press, 2007. http://doi.acm.org/10.1145/1291151.1291189.
Indeed, uniqueness analysis and sharing analysis are very alike: in a
type-based formulation both can be regarded (more or less) as linear
typing extended with a subsumption relation. The difference between
uniqueness analysis and sharing analysis then shows in the direction
of the subsumption relation: in uniqueness analysis "possibly used
more than once" subsumes "used no more than once", while in sharing
analysis "used no more than once" subsumes "possibly used more than
once".
Cheers,
Stefan