[Haskell-cafe] Re: Properties of optimizer rule application?

Roman Leshchinskiy rl at cse.unsw.edu.au
Wed Jan 16 07:34:47 EST 2008

Henning Thielemann wrote:
> Reading various papers and the Wiki about GHC optimizer rules, I got the
> impression that there are not many properties I can rely on, and I wonder
> how I can write a reliable fusion framework under this constraint.

That depends on your definition of reliable. You can't have a framework 
which fuses everything that can be fused but then, I don't think that's 
even theoretically possible. You can, however, have a framework which 
does a pretty good job.

>  I read about the strategy of replacing functions early by fusable
> implementations and replacing them back with fast low-level implementations if
> fusion was not possible. However, can I rely on the back-translation if I
> have no guarantee that the corresponding rule is applied? Is there some
> guarantee that rules are applied as long as applicable rules are available,
> or is the optimizer free to decide that it has worked enough for today?
>  I see several phases with a fixed number of iterations in the output of
> -ddump-simpl-iterations. Is there some idea behind these phases, or are the
> structure and number rather arbitrary? If there is only a fixed number of
> simplifier runs, how can I rely on complete fusion of arbitrarily large
> expressions?

In general, you can't. You can control the number of simplifier phases 
with -fsimplifier-phases (in the HEAD only) and the number of iterations 
in each phase with -fmax-simplifier-iterations.

That said, there are other things that break fusion (such as code 
getting between two functions you want to fuse). Again, you can only try 
to make your framework good enough; it'll never be perfect.

>  At some place I read that the order of application of rules is arbitrary.
> I would like to have some guarantee that more specific rules are applied before
> more general rules. That is, if rule X is applicable wherever Y is
> applicable, then Y shall be tried before X. This is currently not assured,
> right?

IIRC, ghc tries more specific rules first but that's somewhat 
unreliable. You can make rule X inactive in simplifier phase 2, however. 
Then, only rule Y will be tried in phase 2; both rules will be tried in 
subsequent phases.
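As a sketch of that phase trick (all function and rule names here are made up for illustration): GHC's simplifier phases count down 2, 1, 0, and the annotation [1] makes a rule inactive in phase 2 but active from phase 1 on, so the more specific rule gets the first chance to fire:

```haskell
module Main where

-- "general" plays the role of rule X, "special" the role of the more
-- specific rule Y.  The [1] annotation keeps "general" inactive during
-- phase 2, during which only "special" can match.
slow :: [Int] -> [Int]
slow = map (+ 1)
{-# NOINLINE slow #-}  -- NOINLINE keeps 'slow' visible so rules can match it

fastOne, fastTwo :: [Int] -> [Int]
fastOne = map (+ 1)
fastTwo = map (+ 2)

{-# RULES
"general" [1] forall xs. slow xs        = fastOne xs
"special"     forall xs. slow (slow xs) = fastTwo xs
  #-}

main :: IO ()
main = print (slow (slow [1, 2, 3]))
```

Semantically both rules are identities here, so the program prints the same result whether or not they fire; only the intermediate code differs (compile with -O and -ddump-rule-firings to observe which rule matched).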

I suspect, though, that ordering requirements on rules might indicate a 
problem in the design of the fusion framework. I think they are best 
avoided.

>  Another text passage says that simplification is applied inside-out to
> expressions. Such a direction would make the design of rules definitely
> easier. Having both directions, maybe alternating between runs of the
> simplifier, would also be nice. I could then design transforms of the
> kind:
>   toFastStructure . slowA . slowB . slowC . slowWithNoFastCounterpart
>   fastA . toFastStructure . slowB . slowC . slowWithNoFastCounterpart
>   fastA . fastB . toFastStructure . slowC . slowWithNoFastCounterpart
>   fastA . fastB . fastC . toFastStructure . slowWithNoFastCounterpart
>   fastA . fastBC . toFastStructure . slowWithNoFastCounterpart
>   fastABC . toFastStructure . slowWithNoFastCounterpart

Again, I don't think you really want to rely on the order of 
simplification. For your example, you only need the following rules:

toFastStructure (slow{A|B|C} x) = fast{A|B|C} (toFastStructure x)
fastB (fastC x) = fastBC x
fastA (fastBC x) = fastABC x

They do not require any specific traversal order.
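Spelled out as a compilable module (the types and function bodies below are placeholders I invented, since the real slow/fast implementations are not shown in the thread), the shorthand rules become:

```haskell
module Main where

-- Placeholder "fast" representation; in a real framework this would be
-- the fusable structure.
newtype Fast = Fast [Int]

toFastStructure :: [Int] -> Fast
toFastStructure = Fast
{-# NOINLINE toFastStructure #-}  -- keep it matchable by the rules

-- Dummy slow implementations, kept out of the inliner so rules can match.
slowA, slowB, slowC :: [Int] -> [Int]
slowA = map (+ 1)
slowB = map (* 2)
slowC = reverse
{-# NOINLINE slowA #-}
{-# NOINLINE slowB #-}
{-# NOINLINE slowC #-}

fastA, fastB, fastC, fastBC, fastABC :: Fast -> Fast
fastA (Fast xs) = Fast (map (+ 1) xs)
fastB (Fast xs) = Fast (map (* 2) xs)
fastC (Fast xs) = Fast (reverse xs)
fastBC  = fastB . fastC
fastABC = fastA . fastBC

{-# RULES
"toFast/slowA" forall x. toFastStructure (slowA x) = fastA (toFastStructure x)
"toFast/slowB" forall x. toFastStructure (slowB x) = fastB (toFastStructure x)
"toFast/slowC" forall x. toFastStructure (slowC x) = fastC (toFastStructure x)
"fastB/fastC"  forall x. fastB (fastC x)  = fastBC x
"fastA/fastBC" forall x. fastA (fastBC x) = fastABC x
  #-}

main :: IO ()
main = case toFastStructure (slowA (slowB (slowC [1, 2, 3]))) of
         Fast xs -> print xs
```

Each rule only looks at an adjacent pair, so it can fire no matter where the simplifier happens to be working in the expression.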

>  On the one hand, the body of a function may not be available for fusion if
> the INLINE pragma is omitted. As far as I know, inlining may take place
> even without the INLINE pragma, but I have no guarantee. Can I rely on
> functions with an INLINE pragma being inlined?

No. The inliner still uses heuristics to determine whether inlining really 
is beneficial. If you want to be sure, use rewrite rules.
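A common way to get that guarantee (sketched here with a made-up slow/fast pair) is to pair NOINLINE with a rule that performs the replacement unconditionally, sidestepping the inliner's size heuristics:

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- Hypothetical slow implementation; NOINLINE keeps it matchable.
slowSum :: [Int] -> Int
slowSum = foldr (+) 0
{-# NOINLINE slowSum #-}

-- Strict accumulator version we want at every saturated call site.
fastSum :: [Int] -> Int
fastSum = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

-- Unlike INLINE, this fires at every saturated application of slowSum.
{-# RULES "slowSum -> fastSum" forall xs. slowSum xs = fastSum xs #-}

main :: IO ()
main = print (slowSum [1 .. 100])
```

Both sides must of course compute the same result; GHC does not check this, it simply trusts the rule.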

> Somewhere I read that
> functions are not inlined if there is still an applicable rule that uses
> the function on its left-hand side. Altogether I'm uncertain how inlining
> is interleaved with rule application. It was said that rules are just
> alternative function definitions. In this sense, a function definition with
> INLINE is a more aggressively used simplifier rule, right?

No, rules are "more aggressive" since they are applied unconditionally.

>  On the other hand, if I set the INLINE pragma then the body of the
> function is not fused. If it were, I could guide the
> optimizer to fuse several sub-expressions before others. Say,
>   doubleMap f g = map f . map g
>  could be fused to
>   doubleMap f g = map (f . g)
>  and then this fused version could be fused further in the context of the
> caller. The current situation seems to be that {-# INLINE doubleMap #-}
> switches off local fusion and allows global fusion, whereas omitting the
> INLINE pragma switches on local fusion and disallows global fusion. How
> can I have both of them?

If you say {-# INLINE doubleMap #-}, you really expect doubleMap to be 
inlined and never to be called explicitly; therefore, you don't really 
care too much what actually happens to it. You can, however, do 
something like:

{-# NOINLINE doubleMap #-}
doubleMap f g = map f . map g   -- will be fused

{-# RULES
"doubleMap" forall f g. doubleMap f g = map f . map g
  #-}

This makes sure that the rhs of doubleMap will be optimised for those 
cases where the rule doesn't fire (e.g., when doubleMap is applied to 
fewer than two arguments).
