[Haskell-cafe] Re: Properties of optimizer rule application?

Henning Thielemann lemming at henning-thielemann.de
Mon Jan 21 06:29:09 EST 2008


On Thu, 17 Jan 2008, Simon Peyton-Jones wrote:

> | To give a precise example: If I have a sequence of 'map's
> |   map f0 . map f1 . ... . map fn
> |  then there is some length where this is no longer collapsed to a single
> | 'map'?
>
> (a) GHC tries to do as much as possible in a single iteration of the
> simplifier; I think it uses an outermost-first strategy for this.
>
> (b) For each phase it runs the simplifier until nothing changes, or a
> maximum of N times, where N is settable by a command-line-flag
> -fmax-simplifier-iterations.  After N it stops running that phase, even
> if the simplification has not terminated.

This means that the simplifier follows a specific direction (outermost to
innermost or vice versa). I shall not rely on that order, but I must expect
that there is one, and that it restricts the application of rules. If the
simplifier really did "as much as possible in a single iteration", there
would be nothing left to do after one iteration, since the set of
applicable rules remains the same within one phase.
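
To make the opening example concrete: the collapsing I asked about is
driven by the usual map/map rule, sketched here (GHC's own libraries
phrase the right-hand side slightly differently):

  {-# RULES
    "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
    #-}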

> | However then I wonder, how it is possible to make the compiler to
> | go into an infinite loop by the rule
> |
> |    "loop"   forall x,y.  f x y = f y x
>
> Yes, it's possible.  Remember (a) does "as much as possible", which in your rule means rather a lot.

"as much as possible" in a particular order (which I shall not rely on),
right?
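
For the record, spelled out as a complete module that looks like this (a
sketch with made-up definitions; note also that in a RULES forall the
variables are separated by spaces, not commas):

  module Loop where

  f :: Int -> Int -> Int
  f x y = x - y
  {-# NOINLINE f #-}

  {-# RULES
    "loop"  forall x y.  f x y = f y x
    #-}

  g :: Int
  g = f 1 2

With -O the simplifier keeps rewriting 'f 1 2' to 'f 2 1' and back, until
the iteration and tick limits mentioned above make GHC give up.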

> In this thread Roman and I have described stuff that isn't in the
> manual.  Henning, would you feel like elaborating the Wiki page
>         http://haskell.org/haskellwiki/GHC/Using_rules
> (which already has a lot of info) to reflect what you've learned?  That way it's preserved for others.

I have added many points, and I hope I haven't made things more confusing
than they were. I have also added more links and articles to
  http://www.haskell.org/haskellwiki/Category:Program_transformation


I think I also found a typo:

http://www.haskell.org/ghc/docs/latest/html/users_guide/pragmas.html#phase-control

The last line in the list should certainly read:
  "NOINLINE[~k] f" means: be willing to inline f until phase k, but from phase k onwards do not inline it.
   ^^
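
For reference, here is how I now read the phase-control notation, for a
phase number k (phases count downwards, e.g. 2, 1, 0):

  -- {-# INLINE   [k]  f #-}   do not inline f before phase k;
  --                           from phase k on, inline it aggressively
  -- {-# INLINE   [~k] f #-}   inline f aggressively until phase k;
  --                           from phase k on, do not inline it
  -- {-# NOINLINE [k]  f #-}   do not inline f before phase k;
  --                           from phase k on, inlining is allowed as usual
  -- {-# NOINLINE [~k] f #-}   be willing to inline f until phase k;
  --                           from phase k on, do not inline it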


Recently I found that specialisation interacts in an unexpected way with
explicit RULES (and with inlining). I used a function multiple times, and
this apparently made GHC specialise the function to the particular type
(although I did not use a SPECIALISE pragma). But then the function was no
longer available for fusion. It reminds me of the sharing problem - it is
not always an optimization to share common sub-expressions. In this case
the common sub-expression was a function which was used at the same type
(and thus with the same class dictionary) in each call.
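
The shape of the problem, with made-up names (only a sketch, not my actual
code):

  module SpecialiseVsRules where

  newtype Signal a = Signal [a]

  fromList :: Num a => [a] -> Signal a
  fromList xs = Signal (map (+ 0) xs)
  {-# NOINLINE fromList #-}

  mapSignal :: (a -> b) -> Signal a -> Signal b
  mapSignal f (Signal xs) = Signal (map f xs)
  {-# NOINLINE mapSignal #-}

  {-# RULES
    "mapSignal/fromList"  forall f xs.
        mapSignal f (fromList xs) = Signal (map f xs)
    #-}

  -- Two uses at the same type: with -O, GHC may decide on its own to
  -- specialise 'fromList' to Double; the calls then refer to the
  -- specialised copy, and the rule above, which mentions the
  -- polymorphic 'fromList', no longer matches.
  use1, use2 :: Signal Double
  use1 = mapSignal (* 2) (fromList [1, 2, 3])
  use2 = mapSignal (+ 1) (fromList [4, 5, 6])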

Also, in one case declaring a function 'foo' as INLINE [0] prevented fusion
of 'foo', whereas NOINLINE [0] let the fusion happen in phase 2. I had
assumed that these two pragmas behave identically in the phases before 0.
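
The two variants I compared look roughly like this; 'foo', the rule and the
concrete numbers are placeholders:

  module InlinePhase where

  foo :: [Int] -> [Int]
  foo = map (+ 1)
  {-# INLINE [0] foo #-}       -- variant A: here fusion of 'foo' did not happen
  -- {-# NOINLINE [0] foo #-}  -- variant B: here the rule fired in phase 2

  {-# RULES
    "foo/foo"  forall xs.  foo (foo xs) = map (+ 2) xs
    #-}

  test :: [Int] -> [Int]
  test xs = foo (foo xs)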

To summarize, I think what is needed is not only better control over the
phases of the optimizer, but also a clear, unifying concept for the several
kinds of program transformations, namely SPECIALISE, INLINE and RULES.

