[Haskell-cafe] How far compilers are allowed to go with optimizations?

wren ng thornton wren at freegeek.org
Wed Feb 13 08:33:06 CET 2013


On 2/11/13 11:47 AM, Johan Holmquist wrote:
> I was about to leave this topic so as not to swamp the list with
> something that appears to go nowhere. But now I feel that I must
> answer the comments, so here goes.
>
> By "agressive optimisation" I mean an optimisation that drastically
> reduces run-time performance of (some part of) the program. So I guess
> automatic vectorisation could fall into this term.

In that case, I'd say automatic vectorization counts. List fusion also 
counts, but I'm pretty sure you don't want to get rid of that either. 
IMHO, the only thing that distinguishes "aggressive" optimizations from 
the rest is that programmers find them to be finicky. There are two 
general causes of perceived finickiness:

(1) The optimization really is finicky and is triggered (or not) in highly 
unpredictable ways. This is often the case when an optimization is new, 
because it hasn't been battle-tested enough to be robust against the 
diversity we see in real-world code. The early days of list fusion were 
certainly like that, as were the early days of optimizations that depend 
on alias detection. IME, these things tend to get straightened out within 
a few release cycles. If they don't, then the optimization truly is 
finicky and therefore is something bad that should be avoided. I haven't 
found that situation to be very common, however.

(2) The optimization is reliable (enough), but the programmer doesn't 
understand it well enough and thus inadvertently breaks it when making 
"innocuous" code transformations (see the short sketch after this list 
for a concrete example). Again, this is generally the case whenever a 
new optimization shows up. The only way to fix this, really, is to wait 
for people to learn new habits and to internalize a new model of how 
the compiler works. Good explanations of the technology often help that 
process along, but they don't obviate the need for the process.
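
To make the second point concrete, here is a small sketch. The function 
names are mine, and whether fusion actually fires of course depends on 
the GHC version and flags in use:

    -- A pipeline written as one expression is a candidate for GHC's
    -- foldr/build list fusion: the intermediate lists produced by
    -- 'filter' and 'map' can be eliminated entirely.
    fused :: Int -> Int
    fused n = foldr (+) 0 (map (* 2) (filter even [1 .. n]))

    -- An "innocuous" refactoring: name the intermediate list and use
    -- it twice.  The list is now shared between two consumers, so it
    -- must be materialised, and fusion no longer applies.
    refactored :: Int -> (Int, Int)
    refactored n = (foldr (+) 0 xs, length xs)
      where
        xs = map (* 2) (filter even [1 .. n])

Nothing in the source suggests that the second version should allocate 
a whole list while the first does not; that mismatch between the 
programmer's model and the compiler's is exactly what makes such an 
optimization feel "aggressive".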


Both of those situations are triggered by an optimization being new, and 
both of them tend to be resolved once the optimization is no longer new. 
Thus, I don't think it makes much sense to disallow compilers from 
making "aggressive" optimizations, because it is only by shipping and 
exercising them that those optimizations can be rendered non-aggressive.

In all of this, I'm reminded of a recent article on code metrics:

     http://www.neverworkintheory.org/?p=488

The key idea there is to use automatically generated refactorings in 
order to see how different versions of "the same" code are rated. 
Perhaps it'd be worth doing something similar here, in order to identify 
which optimizations remain stable under particular code transformations.
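
As a rough sketch of what that could look like, one could benchmark a 
hand-written pipeline against a mechanically refactored variant, here 
with the criterion library (the input size and the reuse of the two 
functions from the earlier sketch are my own assumptions):

    import Criterion.Main (bench, defaultMain, whnf)

    -- Compare the original pipeline with its refactored variant.  A
    -- large gap between the two suggests the refactoring disabled an
    -- optimization (e.g. fusion) that the original relied on.
    main :: IO ()
    main = defaultMain
      [ bench "original"   (whnf fused 1000000)
      , bench "refactored" (whnf (fst . refactored) 1000000)
      ]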

-- 
Live well,
~wren


