<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">2017-09-26 18:35 GMT+02:00 Ben Gamari <span dir="ltr"><<a href="mailto:ben@smart-cactus.org" target="_blank">ben@smart-cactus.org</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">While it's not a bad idea, I think it's easy to drown in information. Of<br>
course, it's also fairly easy to hide information that we don't care<br>
> about, so perhaps this is worth doing regardless.

The point is: you don't know in advance which of the many performance characteristics "perf" spits out will turn out to be relevant. If, for example, you see a runtime regression where you really didn't expect one (a tiny RTS change, etc.), a quick look at the diffs of all perf values can often give a hint (e.g. branch prediction thrown off by a different code layout).

So I think it's best to collect all the data, but make the user-relevant figures (runtime, code size) more prominent than the technical/internal ones (cache hit ratio, branch prediction hit ratio, etc.), which are for analysis only. Although the latter are a cause of the former, from a compiler user's perspective they are irrelevant. So there is no real risk of drowning in data, because you primarily care about only a small subset of it.
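
To make that concrete, here is a minimal sketch (in Haskell; the counter names and numbers are placeholders, not actual perf output) of what such a report could look like: diff every collected counter against a baseline run, but print the user-relevant metrics before the internal, analysis-only ones.

-- Sketch only: counter names and the numbers in main are placeholders.
import qualified Data.Map.Strict as M
import Text.Printf (printf)

type Counters = M.Map String Double

-- Metrics a compiler user actually cares about, in display order.
userRelevant :: [String]
userRelevant = ["runtime_s", "code_size_bytes"]

-- Relative change of each counter (in percent) against the baseline.
diffCounters :: Counters -> Counters -> M.Map String Double
diffCounters baseline new =
  M.intersectionWith (\old cur -> (cur - old) / old * 100) baseline new

printRow :: (String, Double) -> IO ()
printRow (name, pct) = printf "  %-20s %+7.2f%%\n" name pct

report :: Counters -> Counters -> IO ()
report baseline new = do
  let (prominent, internal) =
        M.partitionWithKey (\k _ -> k `elem` userRelevant)
                           (diffCounters baseline new)
  putStrLn "User-relevant metrics:"
  mapM_ printRow (M.toList prominent)
  putStrLn "Internal counters (analysis only):"
  mapM_ printRow (M.toList internal)

main :: IO ()
main = report
  (M.fromList [ ("runtime_s", 1.20), ("code_size_bytes", 512000)
              , ("branch-misses", 9.0e6), ("cache-misses", 2.1e6) ])
  (M.fromList [ ("runtime_s", 1.32), ("code_size_bytes", 513100)
              , ("branch-misses", 1.4e7), ("cache-misses", 2.2e6) ])

The grouping into "user-relevant" vs. "internal" is the only policy decision here; everything else is just collecting and diffing whatever counters happen to be available.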