controlling resource usage

Jon Fairbairn Jon.Fairbairn@cl.cam.ac.uk
Fri, 07 Sep 2001 19:09:22 +0100


> due to cache coherency issues.) In that context, there was an old
> suggestion of Don Knuth's that program source code should in the
> idealized future not just contain the final, optimized code (with
> perhaps the original code in comments) but a kind of set of layers,
> each representing the same semantic program but with increasing
> orders of optimisation, and links between `things' annotated with a
> label placed by the programmer explaining why the transformation was
> made between an unoptimised version of a `thing' and an optimised
> one.


[snip]

> I suspect that such a system is still at least 25 years away from
> feasibility though :-(

That seems to me to be an overestimate. One of the things
I've had at the back of my mind since just after I faded
from view (as it were) is a kind of formalised hand
compilation.


The background is that an automatic programme transformation
system never knows as much about the problem as the
programmer, so it has little chance of making the right
decisions. In addition, of course, the prerequisites for
certain transformations are undecidable. As a result,
although research has been done on automatic programme
transformation for a long time, people still rewrite
programmes by hand. The same applies to ordinary compilation
(after all, compilation _is_ automatic programme
transformation): strictness analysis is fine, but if your
programme needs strictness and the compiler can't spot it,
you have to annotate by hand anyway.
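
(A concrete instance of that hand annotation, as a hedged sketch: the
classic left fold builds a chain of thunks unless the accumulator is
forced with $! -- the names sumL, sumS and go below are mine, purely
for illustration.)

   -- Without annotation: sumL builds the thunk (((0+1)+2)+...) and
   -- can exhaust the stack on a long list if the compiler's
   -- strictness analysis doesn't catch it.
   sumL :: [Integer] -> Integer
   sumL = foldl (+) 0

   -- With hand annotation: $! forces the accumulator at each step.
   sumS :: [Integer] -> Integer
   sumS = go 0
     where
       go acc []     = acc
       go acc (x:xs) = (go $! acc + x) xs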

So, roughly speaking, the idea is that a programme would
consist of two objects, the unoptimised source code, and an
optimising compiler written specifically for that
programme. Obviously we'd provide default_compile, so that
to begin with one need do nothing special.

Then we'd want a way of breaking down the compilation into
individual functions, for which we could specify different
compilation strategies. For example, if

   f [] = ...
   f x  = h x ++ f (g x)

we might specify

   compile f = default_compile (eliminate_append f)

(Not sure what the syntax should be, or how to specify to
which function a new rule applies)
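
(To make that concrete, here is a sketch of one thing eliminate_append
might produce, with hypothetical fillings for g, h and the [] case so
that it runs; everything below the original two equations is my guess
at the transformation's output, not a fixed design.)

   -- Hypothetical fillings, just so the example is self-contained:
   h :: [Int] -> [Int]
   h = map (* 2)

   g :: [Int] -> [Int]
   g = drop 1

   f :: [Int] -> [Int]
   f [] = []
   f x  = h x ++ f (g x)

   -- A plausible output: an accumulator-passing worker with
   -- go x acc == f x ++ acc.  Every ++ is now right-associated, and
   -- unfolding h into go would eliminate it entirely.
   f_opt :: [Int] -> [Int]
   f_opt x0 = go x0 []
     where
       go [] acc = acc
       go x  acc = h x ++ go (g x) acc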

At the back of my mind is something like the way proofs are
constructed in HOL, though I'm thinking of a system where
the ordinary programmer doesn't write new optimisations,
just gets to select which ones are used on which bits of the
programme. The optimisations can be fairly dull and complain
if the function to which they are applied doesn't match
their pattern. Producing new optimisations would be a task
for a guru, probably armed with a proof assistant. Not that
there's any law that some programmers can't also be such
gurus, of course.
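
(The shape I have in mind, sketched with made-up names: an
optimisation is a partial rewrite on syntax that either fires or
complains, and selecting and sequencing them needs nothing fancier
than Either.)

   -- All names here are hypothetical.
   data Expr = Var String
             | App Expr Expr
             | Append Expr Expr        -- an explicit (++) node
     deriving Show

   type Opt = Expr -> Either String Expr

   -- A dull optimisation that insists on a particular shape:
   reassocAppend :: Opt
   reassocAppend (Append (Append a b) c) =
     Right (Append a (Append b c))     -- (a++b)++c ==> a++(b++c)
   reassocAppend e =
     Left ("reassoc_append: expression doesn't match: " ++ show e)

   -- The default strategy does nothing; sequencing is Either's bind.
   defaultCompile :: Opt
   defaultCompile = Right

   andThen :: Opt -> Opt -> Opt
   andThen p q e = p e >>= q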

I'm certainly not in a fit state to have a go at this
myself, but I'm sure there's lots of stuff from the
programme transformation and proof assistant worlds that
could be applied here, with the advantage that in this
setting it doesn't have to work automatically. There's also
scope for
permitting the application of transformations that may be
only partially correct -- we have that with $! already.
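
(For instance, rewriting f $ x to f $! x only preserves meaning when
f is strict in its argument or x is defined; otherwise the rewrite
changes termination behaviour:)

   safe, unsafe :: Int
   safe   = const 1 $  (undefined :: Int)   -- evaluates to 1
   unsafe = const 1 $! (undefined :: Int)   -- forcing undefined diverges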


I'd want the compilation and optimisation stuff to be kept
in a separate file from the programme itself, but would
ultimately hope for tools that allowed the two to be
manipulated together.


I tried mentioning this to a beginning PhD student a number
of years ago, but somehow it didn't excite him.  Maybe
someone out there has a vision that more closely matches
mine?

 Jón


--
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road                                         jf@cl.cam.ac.uk
Cambridge CB1 3SZ            +44 1223 570179 (after 14:00 only, please!)