[Haskell-cafe] How far compilers are allowed to go with optimizations?

Brandon Allbery allbery.b at gmail.com
Sat Feb 9 14:56:19 CET 2013


On Sat, Feb 9, 2013 at 3:56 AM, Johan Holmquist <holmisen at gmail.com> wrote:

> The code goes into production and, disaster. The new "improved"
> version runs 3 times slower than the old, making it practically
> unusable. The new version has to be rolled back with loss of uptime
> and functionality, and management is not happy with P.
>
> It just so happened that the old code triggered some aggressive
> optimization unbeknownst to everyone, **including the original
> developer**, while the new code did not. (This optimization maybe even
>

This leads ultimately to not allowing compilers to optimize at all, and I
suspect that's a bad plan.  Keep in mind that a modern web application may
be used heavily enough that it doesn't take a "hyper-optimization" to
matter; even small changes in performance can scale up to large
differences.
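
To make that concrete, here is a sketch of my own (not Johan's code):
GHC's list fusion is exactly the sort of aggressive optimization that
can silently stop firing after an innocent-looking refactor.

    module FusionDemo where

    -- Under -O2, GHC's foldr/build fusion typically rewrites this whole
    -- pipeline into a single non-allocating loop: [1..n], filter, map
    -- and sum all participate in fusion.
    fused :: Int -> Int
    fused n = sum (map (* 2) (filter even [1 .. n]))

    -- An innocent-looking "refactor": replace the enumeration with a
    -- hand-rolled producer.  It means the same thing, but it has no
    -- fusion rules, so the list cells really are allocated now and the
    -- constant factor changes noticeably.
    range :: Int -> [Int]
    range n = go 1
      where
        go i
          | i > n     = []
          | otherwise = i : go (i + 1)

    notFused :: Int -> Int
    notFused n = sum (map (* 2) (filter even (range n)))

Neither version is wrong; the difference only shows up in allocation
and run time, which is exactly why it can slip past review.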

Also... what happens when it's not just manual optimization but a bug fix
that triggers this?

> Maybe this is something that would never happen in practice, but how
> to be sure...
>

If this really scares you, disable all compiler optimization.  Now you can
be sure, even at the large scales where small changes have huge effects...
and now you had better be good at hand optimization, and at writing code
in assembly language so you can get that optimization back yourself.
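
For the record, turning the optimizer off is a one-flag affair with GHC
(-O0 and -O2 are standard GHC flags; the Main.hs is just a placeholder),
so you can at least measure what you would be giving up:

    $ ghc -O0 Main.hs   # optimizer off: predictable, usually much slower
    $ ghc -O2 Main.hs   # optimizer on: fast, but subject to the surprises above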

This sounds like going backwards to me.

-- 
brandon s allbery kf8nh                               sine nomine associates
allbery.b at gmail.com                                  ballbery at sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net