[Haskell-cafe] How far compilers are allowed to go with optimizations?

Iustin Pop iusty at k1024.org
Sat Feb 9 10:23:10 CET 2013


On Sat, Feb 09, 2013 at 09:56:12AM +0100, Johan Holmquist wrote:
> As a software developer who typically inherits code to work on rather
> than writing new code from scratch, I see the potential for aggressive
> compiler optimizations to cause trouble. It goes like this:
> 
> Programmer P inherits some application/system to improve upon. One day
> he spots a piece of rather badly written code, so he sets out and
> rewrites it, happy with the improvement this brings to clarity and
> most likely to efficiency as well.
> 
> The code goes into production and, disaster: the new "improved"
> version runs three times slower than the old, making it practically
> unusable. The new version has to be rolled back, with loss of uptime
> and functionality, and management is not happy with P.
> 
> It just so happened that the old code triggered some aggressive
> optimization, unbeknownst to everyone **including the original
> developer**, while the new code did not. (That optimization may even
> have been triggered only by the particular compiler version that
> happened to be in use at the time.)
> 
> I fear being P some day.
> 
> Maybe this is something that would never happen in practice, but how
> can one be sure...
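
To make the quoted scenario concrete, here is a minimal, hypothetical
sketch of how such a rewrite could lose an optimization under GHC.
Whether this particular pipeline really fuses depends on the GHC version
and optimization flags in use, so treat it purely as an illustration; the
function names are made up:

    -- "Old" code: a pipeline of standard list functions. With -O, GHC's
    -- foldr/build fusion can typically collapse this into a single loop
    -- with no intermediate lists.
    process :: [Int] -> Int
    process = sum . map (* 3) . filter even

    -- "New, clearer" code: the mapping stage factored out into a
    -- hand-written recursive helper. The helper takes no part in list
    -- fusion, so the intermediate list is now materialised and the
    -- function can run noticeably slower, even though it looks just as
    -- reasonable as the original.
    transformAll :: [Int] -> [Int]
    transformAll []       = []
    transformAll (x : xs) = 3 * x : transformAll xs

    process' :: [Int] -> Int
    process' = sum . transformAll . filter even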

It's an interesting point, and the sketch above shows one way it could
play out, but I think "P" is still at fault here. If we're talking about
important software, won't there be regression tests, both for quality
and for performance? Surely there will be a canary period, parallel
running of the old and new systems, etc.?
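
On the performance side, even a small benchmark run before a deploy
would catch a 3x slowdown. A minimal sketch using the criterion package
(process and process' here are trivial stand-ins for the real old and
new implementations being compared):

    import Criterion.Main

    -- Trivial stand-ins; in the real scenario these would be the old
    -- and new implementations under comparison.
    process, process' :: [Int] -> Int
    process  = sum . filter even
    process' = sum . filter even

    main :: IO ()
    main = defaultMain
      [ bench "old implementation" $ nf process  [1 .. 100000]
      , bench "new implementation" $ nf process' [1 .. 100000]
      ]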

regards,
iustin


