<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">2018-05-05 21:23 GMT+02:00 Andreas Klebinger <span dir="ltr"><<a href="mailto:klebinger.andreas@gmx.at" target="_blank">klebinger.andreas@gmx.at</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">[...] I came across cases where inverting conditions lead to big performance losses since suddenly block layout<br>
got all messed up. (~4% slowdown for the worst offenders). [...]<br></blockquote><div> </div><div>4% is far from being "big", look e.g. at <a href="https://dendibakh.github.io/blog/2018/01/18/Code_alignment_issues" target="_blank">https://dendibakh.github.io/<wbr>blog/2018/01/18/Code_<wbr>alignment_issues</a> where changing just the alignment of the code lead to a 10% difference. :-/ The code itself or its layout wasn't changed at all. The "Producing Wrong Data Without Doing Anything Obviously Wrong!" paper gives more funny examples.<br></div><div><br></div><div>I'm not saying that code layout has no impact, quite the opposite. The main point is: Do we really have a benchmarking machinery in place which can tell you if you've improved the real run time or made it worse? I doubt that, at least at the scale of a few percent. To reach just that simple yes/no conclusion, you would need quite a heavy machinery involving randomized linking order, varying environments (in the sense of "number and contents of environment variables"), various CPU models etc. If you do not do that, modern HW will leave you with a lot of "WTF?!" moments and wrong conclusions.</div><div><br></div></div></div></div>
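
To make that concrete: the classic trick from the "Producing Wrong Data
..." paper is to grow the UNIX environment, which can shift the stack
and change alignment without touching the program at all. Here is a tiny
sketch of that kind of experiment (Haskell, since we're on this list;
"./my-benchmark" is a made-up binary and this is nothing like a real
harness); randomized link order, several CPU models etc. would still
have to come on top of this:

  -- Run the same benchmark binary under environments of different
  -- sizes and watch how much the timings move.
  import Control.Monad (forM_, replicateM)
  import Data.Time.Clock (diffUTCTime, getCurrentTime)
  import System.Process (CreateProcess (..), proc, readCreateProcess)

  -- Time one run of the benchmark with the given replacement environment.
  -- "./my-benchmark" is a placeholder for whatever you want to measure.
  timeRun :: [(String, String)] -> IO Double
  timeRun environment = do
    let p = (proc "./my-benchmark" []) { env = Just environment }
    start <- getCurrentTime
    _ <- readCreateProcess p ""
    end <- getCurrentTime
    pure (realToFrac (diffUTCTime end start))

  main :: IO ()
  main =
    forM_ [0, 64, 128, 256, 512] $ \padding -> do
      -- A single dummy variable of growing size; this alone can shift
      -- the initial stack and with it the alignment of data above it.
      let environment = [("BENCH_PADDING", replicate padding 'x')]
      times <- replicateM 5 (timeRun environment)
      putStrLn $ "env padding " ++ show padding ++ ": best of 5 = "
              ++ show (minimum times) ++ "s"

If the best-of-5 numbers already wander by a percent or two between the
padding sizes, that tells you how seriously to take a 4% swing from an
actual layout change.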