An idea for a different style of metaprogramming evaluation using the optimiser

Matthew Pickering matthewtpickering at gmail.com
Tue Feb 27 09:59:27 UTC 2018


I've had an idea for a while for a different way we could evaluate
TH-like splices, one which would be more lightweight and easier to
work with.

The idea is to create a third quotation/splicing mechanism which has
no introspection (like MetaML) but then to evaluate these quotes and
splices in the optimiser rather than using the bytecode interpreter.

I am motivated by the many examples of recursive functions in
libraries which, when given a statically known argument, could be
unrolled to produce much better code. It is understandable that the
compiler doesn't try to do this itself, but there should be an easy
way for the user to direct the compiler to do so. (See also
https://www.reddit.com/r/haskell/comments/7yvb43/ghc_compiletime_evaluation/)

An example to be concrete:

Take the power function such that power k n computes k^n.

power :: Int -> Int -> Int
power k n = if n == 0
              then 1
              else k * power k (n - 1)

If we statically know n then we can create a staged version. We use R
to indicate that an
argument is dynamically known.

power :: R Int -> Int -> R Int
power k n = if n == 0
              then .< 1 >.
              else .< ~k * ~(power k (n-1)) >.

One way to implement this in Haskell is to use typed template haskell
quoting and splicing.
The key to why this works is that in order to splice
`power k (n-1)` we must evaluate `power k (n-1)` to something of type
`R Int`, which we can then splice back into the quote.
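Written with typed Template Haskell, the staged power might look like
the sketch below (using the Q (TExp a) interface; later GHCs spell
this type Code Q a). Note that the splice has to be run from a module
that imports this one, because of the stage restriction:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module StagedPower where

import Language.Haskell.TH.Syntax (Q, TExp)

-- 'Q (TExp Int)' plays the role of 'R Int': a dynamically known Int.
-- 'n' is static, so the recursion unrolls while the splice is run.
power :: Q (TExp Int) -> Int -> Q (TExp Int)
power k n = if n == 0
              then [|| 1 ||]
              else [|| $$k * $$(power k (n - 1)) ||]

-- In an importing module:
--   cube :: Int -> Int
--   cube x = $$(power [|| x ||] 3)
-- which elaborates to x * (x * (x * 1)).
```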

The way this is currently implemented is that the bytecode interpreter
runs these splices in order to find out what they evaluate to.

The idea is that instead, we add another mode which runs splices in
the core-to-core optimiser. The optimiser performs evaluation by beta
reduction, inlining and constant folding. For simple definitions over
algebraic data types it does a very good job of eliminating overhead,
as long as the definition is not recursive. If we can get the
optimiser to inline recursive calls in a controlled manner as well, it
should do a good job on the unrolled definition.
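For instance, once the splice has unrolled power at a statically known
n = 3, all that remains for the simplifier is a non-recursive chain of
multiplications (a hand-written sketch of the residual program, not
actual compiler output; the trailing * 1 is the sort of thing its
constant-folding rules then tidy up):

```haskell
-- Residual program after splicing the staged power at n = 3:
cube :: Int -> Int
cube x = x * (x * (x * 1))

main :: IO ()
main = print (cube 2)  -- prints 8
```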

The benefits of this are that it works across all platforms and fits
nicely already into the compilation pipeline. It also fits in nicely
with the intuition that a "quote" means to stop evaluating and a
"splice" means to evaluate.

A disadvantage is that the optimiser is only a *partial* evaluator
rather than an evaluator: it gets stuck evaluating things containing
primitives and so on. I don't foresee this as a major issue, but it is
a limitation that library authors should be aware of when writing
their staged programs. To go back to the power example, the recursion
argument would have to be an inductively defined natural (data N = Z
| S N) rather than an Int, as the comparison operator for machine
integers can't be evaluated by the optimiser. It is already rather
easy to write staged programs which loop if you don't carefully
consider the staging, so this seems no worse than the status quo.
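To make that concrete, here is the power example restated with an
inductive natural for the static argument (an unstaged sketch, but it
shows why the recursion now proceeds by pattern matching on known
constructors, which the simplifier can reduce, rather than by an Int
comparison primitive, which it cannot):

```haskell
-- The static argument is a Peano natural, scrutinised only by
-- pattern matching.
data N = Z | S N

power :: Int -> N -> Int
power _ Z     = 1
power k (S m) = k * power k m

main :: IO ()
main = print (power 2 (S (S (S Z))))  -- 2^3, prints 8
```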

Does this sound at all sensible to anyone else?

Matt


More information about the ghc-devs mailing list