<div dir="ltr"><div dir="ltr"><div>Hello,<br></div><div><br></div><div>All the questions you have asked are important, but before I answer them I'd like to emphasize that as a research platform it is worth supporting WPC/WPA even with the worst case memory and runtime properties. It would allow researchers to observe Haskell programs with arbitrary precision. Not giving access to the whole program IR simply rules out certain types of research directions. IMO this is an important issue, because some experiment might reveal something that is applicable in practice or might be implemented in the incremental compilation pipeline as well. I find it cool to allow developers to use GHC in an unexpected or unintended way that may lead to something new and valuable.<br></div>It is common sense that whole (or large chunks) program analysis and compilation is infeasible. I believe that common sense needs to be reevaluated time to time, because the conditions may change. E.g.. Would the existence of huge amounts of RAM with multicore CPUs or GPU or quantum computers change the situation? Not to mention the new research results of the static analysis field.<br><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p class="MsoNormal"><span>Questions that come to mind (to be understood in the context of the above enthusiasm):</span></p>
<ul style="margin-top:0cm" type="disc"><li style="margin-left:0cm"><span>If you compile a program that depends on (say) lens, you get a lot of code. Dead-code elim will drop lots, perhaps, but you start with everything.
So what do memory footprints and compile times look like when you do WPC? Remembering that people often complain about GHC’s footprint when compiling a
<i>single</i> module.</span><span> <br></span><span>Also, WPC means that instead of just linking to precompiled libraries, you have to recompile (parts of) them. What does that do to compile times?</span></li></ul></div></blockquote><div><div class="gmail_quote">I deliberately split the WPC pipeline into IR export and import/codegen pieces. GHC-WPC does only the STG IR export, and the External STG compiler can be seen as a separate project that uses GHC-WPC as frontend and GHC as backend. It is important to note that the compilation pipeline design is not fixed in this setting. It could implement the regular per-module incremental compilation, or it could batch-compile multiple modules or even the whole program. The IR export part is the same in each case; only the backend driver differs.<br>While the GHC-WPC project currently implements the whole-program compilation scenario, I also plan to extend it to allow incremental compilation with arbitrarily sized program clusters. So it would not be tied to the module or package boundaries imposed by the source code structure; instead, the compilation unit granularity could be changed at will, possibly driven by some static or runtime analysis. It would be a free parameter that can be adjusted to get an acceptable compilation time, and as software and hardware advance, the practical compilation unit size would increase.</div><div class="gmail_quote">It is also possible to implement incremental whole-program compilation. This technique is already used in the Visual C++ compiler: <a href="https://dl.acm.org/doi/pdf/10.5555/3049832.3049857" target="_blank">Incremental Whole Program Optimization and Compilation</a></div><div class="gmail_quote"><br></div><div class="gmail_quote"><div>The current whole-program compilation pipeline first compiles the project with GHC-WPC. GHC-WPC compiles the target project through the standard GHC backend and generates an executable; in addition, it exports the Ext-STG IR. Then gen-exe is executed, and the whole-program STG compilation is done in the following steps:<br><ol><li>extracting liveness datalog facts from each project module</li><li>running the whole-program liveness analysis</li><li>per-module link-time codegen for the whole project, cutting out the dead top-level functions using the liveness analysis result</li></ol></div>I designed the pipeline to have a light memory footprint. I intentionally implemented the static analyses in Datalog using the Souffle Datalog compiler. Souffle generates small and efficient parallel (OpenMP) C++ code that stores the in-memory database in specialised data structures (Trie/B-tree).<br>Another important design choice was to collect the datalog facts incrementally for static analysis, instead of loading the whole-program IR into memory. The insight is that it is not the complete IR that is needed for whole-program analysis, but just a projection of it. It is perfectly OK to calculate that projection on a per-module basis, then run the static analysis on the collected data. Furthermore, it is even possible to store the collected facts in files, separately for each module, for later reuse (see the sketch after the measurements below).</div></div><br><div>I measured the compilation of pandoc and a simple (hello-world-like) project. I put the collected data, grouped by compilation stage, into bullet points with notes:</div><div><ul><li>ext-stg export<br>Currently the serializer uses binary via the generic instance, which is not the fastest method. 
The store package via TH would be the fastest way of exporting the IR, but it is not among the foundation libraries. Also, at the current stage of the project this is not an issue (binary is good enough).</li><li>liveness datalog fact generator<br>simple: 2.107545216s<br>pandoc: 18.431660236s</li><li>liveness datalog facts size:<br>simple: 11 MB<br>pandoc: 186 MB</li><li>liveness analysis (CSV) result size:<br>simple: 92 KB<br>pandoc: 14 MB</li><li>liveness analysis memory usage:<br>simple: Maximum resident set size: 15 MB<br>pandoc: Maximum resident set size: 109 MB</li><li>liveness analysis run time:<br>simple: 0.118736736s<br>pandoc: 2.322970868s</li><li>link-time codegen via GHC:<br>simple: 15.683492995s<br>pandoc: 790.070061268s<br>memory usage: around 100 MB (typical GHC module compilation footprint)</li><li>gen-exe is the whole-program STG compiler driver:<br>simple:<br>Maximum resident set size: 186 MB<br>Elapsed (wall clock) time: 18.63 sec<br>pandoc:<br>Maximum resident set size: 401 MB<br>Elapsed (wall clock) time: 13 min 35 sec</li></ul>GHC-WPC Ext-STG IR serializer<br></div><div><ul><li>without ext-stg export, -O0, from scratch with deps (full world)<br>simple: 5.455 sec<br>pandoc: 8 min 54.363 sec</li><li>with ext-stg export, -O0, from scratch with deps (full world)<br>simple: 5.576 sec<br>pandoc: 12 min 50.101 sec</li></ul></div><div><br></div>
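<div>To make the fact extraction step concrete, here is a minimal Haskell sketch of the per-module projection idea. The types and names (Module, TopBinding, writeFacts, the "FunReference" relation) are hypothetical simplifications for illustration, not the actual external-stg API:</div><div><pre>
-- Minimal sketch of per-module datalog fact extraction (hypothetical
-- simplified types, not the actual external-stg API). Each module is
-- projected to (function, referenced-function) facts and written as a
-- tab-separated file, the format Souffle can load directly.
module FactGen where

import Data.List (nub)

type Name = String

data TopBinding = TopBinding { bindName :: Name, bindRefs :: [Name] }
data Module     = Module { modName :: String, modBindings :: [TopBinding] }

-- project a module to "FunReference" facts
funReferenceFacts :: Module -> [(Name, Name)]
funReferenceFacts m =
  nub [ (bindName b, ref) | b <- modBindings m, ref <- bindRefs b ]

-- store the facts in a separate file per module, for later reuse
writeFacts :: Module -> IO ()
writeFacts m =
  writeFile (modName m ++ ".FunReference.facts") $
    unlines [ f ++ "\t" ++ g | (f, g) <- funReferenceFacts m ]
</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p class="MsoNormal" style="margin-left:36pt"><span></span></p>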
<ul style="margin-top:0cm" type="disc"><li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>I love the 25% reduction in binary size. In fact I’m surprised it isn’t bigger.</span><span> </span>
<span></span></li></ul></div></blockquote><div>I used a simple and fast method for the link-time dead code elimination. It transitively calculates the references of the modules' top-level functions; then only the reachable ones are passed to the codegen. So it eliminates neither local closures nor dead case alternatives. Check the analysis source code to see how simple the implementation is: <a href="https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/master/external-stg-compiler/datalog/ext-stg-liveness.dl">ext-stg-liveness.dl</a> (a Haskell sketch of the same computation follows at the end of this answer).</div><div><br></div><div>I also implemented a much more sophisticated control flow analysis, which could eliminate much more code, e.g.:<br><ul><li>constructor data fields</li><li>case alternatives</li><li>function parameters</li><li>local closures<br></li></ul>It is future work to integrate this precise CFA into the external STG compiler pipeline. My initial goal was to create the simplest and fastest working whole-program compiler pipeline for GHC, so I prioritised my plans and tasks accordingly.<br>I'd like to mention that the current pipeline design is the result of iterative development. The first version was written entirely in Haskell. It comprised a points-to analysis and the simple liveness analysis. The Haskell version of these analyses worked fine for small programs, but the memory and runtime properties were bad. Of course I tried to add some strictness annotations, which helped a bit, but it was not good enough. Then I rewrote the points-to analysis in C++, which resulted in a much better memory footprint, but my fixed-point evaluator was still naive, so it was not fast enough. The C++ implementation also increased the development and maintenance complexity, which I did not like. Luckily I found the Souffle Datalog compiler, which just fits my needs. It is super easy to write static analyses in Datalog, and it does not compromise on performance either. I'd like to write all these parts in Haskell, but IMO it is not possible yet. Hopefully this will change in the future; maybe LoCal can make a big difference (<a href="http://recurial.com/pldi19main.pdf">http://recurial.com/pldi19main.pdf</a>).</div>
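<div><br></div><div>For illustration, here is a minimal Haskell sketch of the transitive-reachability computation that ext-stg-liveness.dl expresses in Datalog. It reuses the hypothetical simplified types from the fact extraction sketch above; the real analysis runs on the Souffle fact database instead:</div><div><pre>
-- Minimal sketch of the transitive liveness computation, mirroring the
-- Datalog rules in ext-stg-liveness.dl (hypothetical simplified types;
-- the real analysis runs on the Souffle fact database).
module Liveness where

import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Name = String

-- fixed point of: live(f) :- root(f).
--                 live(g) :- live(f), funReference(f, g).
liveSet :: [Name] -> [(Name, Name)] -> Set.Set Name
liveSet roots refs = go (Set.fromList roots) roots
  where
    refMap = Map.fromListWith (++) [ (f, [g]) | (f, g) <- refs ]
    go live []       = live
    go live (f : ws) =
      let new = [ g | g <- Map.findWithDefault [] f refMap
                    , g `Set.notMember` live ]
      in  go (foldr Set.insert live new) (new ++ ws)

-- the per-module link-time codegen then keeps only the live bindings
keepLive :: Set.Set Name -> [(Name, binding)] -> [(Name, binding)]
keepLive live = filter (\(n, _) -> n `Set.member` live)
</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><ul style="margin-top:0cm" type="disc"><li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>Why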
are you using STG? It’s mostly untyped – or at least much less
strongly typed than Core. It has lots of restrictions (like ANF) that
Core does not. Indeed I think of STG as a little lily-pad to alight
on in the hop from Core to Cmm. Maybe your entire setup would work
equally well with Core, provided you can serialise and deserialise it.</span></li></ul></div></blockquote>To me, every IR layer is an API for the compiler. The long-term existence of the different IRs must have a reason. As I understand it, GHC Core is about polymorphic reasoning, while the STG IR is about operational semantics, with an explicit notion of evaluation, allocation of closures/data constructors, and pattern matching. Thirdly, Cmm is the language of the runtime system, expressing the lowest-level memory operations.</div><div dir="ltr"><br></div><div dir="ltr">I chose STG because I am interested in the analysis of the operational semantics of Haskell programs. I see the STG IR as an entry/exit point for the GHC pipeline. The actual IR data type does not matter, because most researchers or developers will have their own needs regarding the IR conventions. E.g. I'd like to write analyses in Souffle/Datalog, but both GHC Core and GHC STG are unsuitable for direct Datalog translation, because not every expression has a name, let alone a unique one. But this is totally fine. It is impossible to design the perfect IR; instead, everyone should do a customised conversion for their project.</div><div dir="ltr"><br></div><div dir="ltr">IMO STG is implicitly typed: its type system is TYPE's RuntimeRep kind parameter, which describes a value's runtime representation. And this is exactly what I'm interested in, because the Core type system is not expressive enough for the optimizations that I'd like to implement. In fact, GHC's unarise pass, which flattens unboxed tuples, is implemented on STG because its output would not type-check in Core (see the example at the end of this answer). There are also Cmm and LLVM transformations that are semantically valid but not typeable in GHC Core.<br></div><div dir="ltr">Ideally it would be great to have a single IR that could express both Core and Cmm properties in a typed way, but that would require a much stronger (probably dependent) type system. I find this idea a promising research direction.<br></div><div dir="ltr"><br></div><div dir="ltr">Indeed, the whole GHC-WPC could support Core as well; it would just be an additional export operation at the same place in the GHC and Cabal source code where STG gets exported. The same would be true for the Cmm IR.</div>
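<div dir="ltr"><br></div><div dir="ltr">As a small example of the kind of code that unarise rewrites, here is a sketch in plain Haskell source, just for illustration (the function name is made up; quotRemInt# already exists as a primop):</div><div dir="ltr"><pre>
{-# LANGUAGE MagicHash, UnboxedTuples #-}
-- Illustration: a function whose STG form is changed by unarise.
module UnariseDemo where

import GHC.Exts

-- In Core this function has the unboxed-tuple result type
-- (# Int#, Int# #). Unarise flattens the tuple in STG so that the
-- function yields two separate Int# values; that flattened form has
-- no corresponding Core type.
quotRemI# :: Int# -> Int# -> (# Int#, Int# #)
quotRemI# n d = (# n `quotInt#` d, n `remInt#` d #)

main :: IO ()
main = case quotRemI# 7# 2# of
  (# q, r #) -> print (I# q, I# r)
</pre></div><div dir="ltr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><p class="MsoNormal"><span></span></p>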
<ul style="margin-top:0cm" type="disc"><li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>Moreover, we *<b>already</b>* have a fast serialiser and deserialiser for Core – the stuff we use for interface files. So maybe you could
re-use that … no need for pretty-print and parse.</span></li></ul></div></blockquote><div>Ideally there would be a binary import/export facility and a pretty printer for every IR layer. Speed might matter for the main use cases, but for the experimental ones it is not an issue at all. Serializers would allow us to use GHC's internal components at a finer granularity and from arbitrary programming languages. Having a binary IR (de)serializer without a specification or stability guarantees is still much better than having nothing.</div>
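<div><br></div><div>For reference, the current Ext-STG export (binary via the generic instance, as measured above) boils down to something like the following sketch. The Expr type here is a made-up stand-in for the real Ext-STG AST:</div><div><pre>
{-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}
-- Sketch of the Generic-derived binary (de)serializer approach;
-- Expr is a made-up stand-in for the real Ext-STG AST.
module SerializeDemo where

import Data.Binary (Binary, decodeFile, encodeFile)
import GHC.Generics (Generic)

data Expr
  = Var String
  | Lit Int
  | App Expr [Expr]
  deriving (Show, Generic, Binary)  -- Binary comes via DeriveAnyClass

exportIR :: FilePath -> Expr -> IO ()
exportIR = encodeFile

importIR :: FilePath -> IO Expr
importIR = decodeFile
</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>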
<ul style="margin-top:0cm" type="disc"><li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm">You
say “That would mean a significant conceptual shift in the GHC compiler
pipeline, because heavy optimizations would be introduced at the low
level IRs beside GHC Core.” Fair enough,
but what I’m missing is the <b>rationale</b> for doing heavy opts on STG rather than Core.<span></span></li></ul></div></blockquote><div>The Core type system is not expressive enough to reason about memory layout. Of course, STG's RuntimeRep type system cannot describe the shape of the heap either, but at least it can be used as a starting point in an experiment.<br>Haskell and pure functional programming formed my views on how to automate programming tasks and how to utilise the type system for specific domains. Haskell, as a pure functional language, also shaped how I see compilers. The traditional view is that at the top of the compilation pipeline there is the high-level language with a rich and rigorous type system, and during the compilation (lowering) process every lower-level IR has a weaker type system that encodes slightly less semantic information; at the level of assembly language or machine code, nothing is left of the original types and semantic information. Instead of this traditional approach, I can imagine a pipeline that preserves high-level semantic information during the lowering process. But for this, the low-level IRs would require increasingly expressive type systems. I'd like to explore this idea further, and GHC would be a good platform for such an experiment. I also believe that this direction is the future of compilers and programming languages.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
<ul style="margin-top:0cm" type="disc"><li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>Apart from (a) dead code, and (b) GRIN, do you have ideas in mind for what we could do with WPC?</span></li></ul></div></blockquote><div>I have lots of ideas regarding the usage of the whole program IR.</div><div><ul><li>GHC-WPC's exported STG IR could be useful for any backend or codegen project.<br>The existing cross compilation projects like Asterius and GHCJS could use the External-STG as input IR. The benefit of this approach is that the alternative Haskell compiler backends do not need to be supported by Cabal directly instead they would entirely rely on the External STG IR and the exported linker metadata file.<br><br></li><li>External STG could allow using GHC as frontend and backend, with additional support of custom IR transformations.<br>E.g. maybe the <a href="http://iu-parfunc.github.io/gibbon/">Gibbon</a> language project could compile real-world Haskell programs via Ext-STG. Or maybe Purescript or Idris would add GHC with RTS as a new target. STG's weaker type system is actually beneficial for programming languages that differ from Haskell in evaluation model or type system.<br>Maybe I'll add external Core or external Cmm support also.<br><br></li><li>The Intel Haskell Research Compiler optimizer part is called Functional Language Research Compiler (<a href="https://github.com/IntelLabs/flrc#the-functional-language-research-compiler-">FLRC</a>) and its IR is called MIL. Its source is on github and it still compiles. I'd like to plug the FLRC vectorising and memory layout optimizer into GHC. FLRC's RTS was not optimized for laziness, however it would be interesting to see how it would perform with GHC's fine tuned GC and RTS.<br><br></li><li>I also plan to implement an STG interpreter with FFI support that could run any Haskell program.<br>With such an interpreter I could implement runtime control flow tracing, or a stop the world debugger that could observe and visualize the runtime heap values. I'd like to track the source origin and lifetime of run-time values. I'd use that information to build a better intuition of the Haskell program’s runtime behaviour. I hope that it will lead to important insights to improve the performance. I'm optimistic with this idea because AFAIK it has not been done yet. Haskell's GC removes the unreachable values from the memory that might be dead for a long time. According to the As Static As Possible Memory Management thesis, reachability is too conservative approximation of the actual liveness.<br><br></li><li>I’d like to do partial defunctionalization on the STG IR.<br>Regarding the engineering complexity of an optimizing compiler I believe it is the easiest to start with a whole program analysis and compilation. It is easier to construct a static analysis as a data flow analysis, then later with the insights the same analysis could be formulated as a type system. Which might enable compositional analysis, and therefore incremental compilation.</li></ul>I see GHC-WPC as a framework that would allow us to explore these ideas.</div><br>I often hear from researchers and Haskell experts that whole program compilation does not work. They seem so confident with this claim, and I'd like to understand why. I read several papers on the topic and also checked quite a few compilers’ source code. LLVM, GCC, Visual C++ is doing LTO. 
The Intel Haskell Research Compiler's results, MLton, and Boquist's GRIN all support the idea of whole-program analysis and compilation. At the very least, it seems reasonable to continue research in this direction. I wonder whether those who concluded against WPA looked at the same papers and projects.<br><br>Do you think that I am on the wrong track?<br><div>Am I chasing unimportant or impossible things?</div><div><br>Important papers:</div><div><ul><li>The Intel Labs Haskell Research Compiler<br><a href="http://www.leafpetersen.com/leaf/publications/hs2013/hrc-paper.pdf">http://www.leafpetersen.com/leaf/publications/hs2013/hrc-paper.pdf</a></li><li>Measuring the Haskell Gap<br><a href="http://www.leafpetersen.com/leaf/publications/ifl2013/haskell-gap.pdf">http://www.leafpetersen.com/leaf/publications/ifl2013/haskell-gap.pdf</a></li><li>LoCal: A Language for Programs Operating on Serialized Data<br><a href="http://recurial.com/pldi19main.pdf">http://recurial.com/pldi19main.pdf</a></li><li>On fast large-scale program analysis in Datalog<br><a href="https://souffle-lang.github.io/pdf/cc.pdf">https://souffle-lang.github.io/pdf/cc.pdf</a></li><li>ASAP: As Static As Possible memory management<br><a href="https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-908.pdf">https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-908.pdf</a></li><li>Pushdown Control-Flow Analysis for Free<br><a href="https://arxiv.org/abs/1507.03137">https://arxiv.org/abs/1507.03137</a></li></ul></div><div>Regards,</div><div>Csaba Hruska<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 15, 2020 at 11:34 AM Simon Peyton Jones <<a href="mailto:simonpj@microsoft.com">simonpj@microsoft.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-GB">
<div class="gmail-m_-1391975889066283759WordSection1">
<p class="MsoNormal"><span>I’ve always thought that whole-program compilation has the possibility of doing optimisations that are simply inaccessible without the whole program, but been daunted by the engineering challenges
of making WPC actually work. So it’s fantastic that you’ve made progress on this. Well done!
<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span>Questions that come to mind (to be understood in the context of the above enthusiasm):<u></u><u></u></span></p>
<ul style="margin-top:0cm" type="disc">
<li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>If you compile a program that depends on (say) lens, you get a lot of code. Dead-code elim will drop lots, perhaps, but you start with everything.
So what do memory footprints and compile times look like when you do WPC? Remembering that people often complain about GHC’s footprint when compiling a
<i>single</i> module.<u></u><u></u></span></li></ul>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal" style="margin-left:36pt"><span>Also, WPC means that instead of just linking to precompiled libraries, you have to recompile (parts of) them. What does that do to compile times?
<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<ul style="margin-top:0cm" type="disc">
<li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>I love the 25% reduction in binary size. In fact I’m surprised it isn’t bigger.<u></u><u></u></span></li></ul>
<p class="gmail-m_-1391975889066283759MsoListParagraph"><span><u></u> <u></u></span></p>
<ul style="margin-top:0cm" type="disc">
<li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>Why are you using STG? It’s mostly untyped – or at least much less strongly typed than Core. It has lots of restrictions (like ANF) that
Core does not. Indeed I think of STG as a little lily-pad to alight on in the hop from Core to Cmm. Maybe your entire setup would work equally well with Core, provided you can serialise and deserialise it.<u></u><u></u></span></li></ul>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<ul style="margin-top:0cm" type="disc">
<li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>Moreover, we *<b>already</b>* have a fast serialiser and deserialiser for Core – the stuff we use for interface files. So maybe you could
re-use that … no need for pretty-print and parse.<u></u><u></u></span></li></ul>
<p class="gmail-m_-1391975889066283759MsoListParagraph"><span><u></u> <u></u></span></p>
<ul style="margin-top:0cm" type="disc">
<li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm">You say “That would mean a significant conceptual shift in the GHC compiler pipeline, because heavy optimizations would be introduced at the low level IRs beside GHC Core.” Fair enough,
but what I’m missing is the <b>rationale</b> for doing heavy opts on STG rather than Core.<span><u></u><u></u></span></li></ul>
<p class="gmail-m_-1391975889066283759MsoListParagraph"><span><u></u> <u></u></span></p>
<ul style="margin-top:0cm" type="disc">
<li class="gmail-m_-1391975889066283759MsoListParagraph" style="margin-left:0cm"><span>Apart from (a) dead code, and (b) GRIN, do you have ideas in mind for what we could do with WPC?<u></u><u></u></span></li></ul>
<p class="gmail-m_-1391975889066283759MsoListParagraph"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span>Thanks<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span>Simon<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<div style="border-color:currentcolor currentcolor currentcolor blue;border-style:none none none solid;border-width:medium medium medium 1.5pt;padding:0cm 0cm 0cm 4pt">
<div>
<div style="border-color:rgb(225,225,225) currentcolor currentcolor;border-style:solid none none;border-width:1pt medium medium;padding:3pt 0cm 0cm">
<p class="MsoNormal"><b><span lang="EN-US">From:</span></b><span lang="EN-US"> ghc-devs <<a href="mailto:ghc-devs-bounces@haskell.org" target="_blank">ghc-devs-bounces@haskell.org</a>>
<b>On Behalf Of </b>Csaba Hruska<br>
<b>Sent:</b> 14 June 2020 13:46<br>
<b>To:</b> Alexis King <<a href="mailto:lexi.lambda@gmail.com" target="_blank">lexi.lambda@gmail.com</a>><br>
<b>Cc:</b> GHC developers <<a href="mailto:ghc-devs@haskell.org" target="_blank">ghc-devs@haskell.org</a>><br>
<b>Subject:</b> Re: Introducing GHC whole program compiler (GHC-WPC)<u></u><u></u></span></p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Hi,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
I thought about the GHC-LTO project name before, but it would not be an accurate description though. The GHC-WPC in its current state is about exporting STG + linker info for later processing, either feed it back to GHC backend or to a third party pipeline.
It depends what the user/researcher wants, the point is that GHC-WPC solves the IR export part of the issue. It is the external stg compiler that implements a (simple) whole program dead function elimination pass that I implemented as a proof of concept to
show the new possibilities GHC-WPC opens up. But I plan to do much more optimization with sophisticated dataflow analyses. I.e. I have a fast and working implementation of control flow analysis in souffle/datalog that I plan to use to do more accurate dead
code elimination and partial program defunctionalization on the whole program STG IR. In theory I could implement all GRIN optimizations on STG. That would mean a significant conceptual shift in the GHC compiler pipeline, because heavy optimizations would
be introduced at the low level IRs beside GHC Core. I'd like to go even further with experimentation. I can imagine a dependently typed Cmm with a similar type system that ATS (<a href="http://www.ats-lang.org/MYDATA/VsTsVTs-2018-10-28.pdf" target="_blank">http://www.ats-lang.org/MYDATA/VsTsVTs-2018-10-28.pdf</a>)
has. I definitely would like to make an experiment in the future, to come up with an Idris2 EDSL for GHC RTS heap operations where the type system would ensure the correctness of pointer arithmetic and heap object manipulation. The purpose of GHC-WPC in this
story is to deliver the IR for these stuff.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Beside exporting STG IR, the external STG compiler can compile STG via GHC's standard code generator. This makes GHC codegen/RTS available as a backend for programming language developers. I.e. Idris, Agda, Purescript could use GHC/STG/RTS as a backend with
all of its cool features.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
So these are the key parts of my vision about the purpose and development of GHC-WPC. It is meant to be more than a link time optimizer.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Cheers,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Csaba<u></u><u></u></p>
</div>
</div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
On Sat, Jun 13, 2020 at 10:26 PM Alexis King <<a href="mailto:lexi.lambda@gmail.com" target="_blank">lexi.lambda@gmail.com</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border-color:currentcolor currentcolor currentcolor rgb(204,204,204);border-style:none none none solid;border-width:medium medium medium 1pt;padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
<div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Hi Csaba,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
I originally posted this comment <a href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Fhaskell%2Fcomments%2Fh7t8wr%2Fintroducing_ghc_whole_program_compiler_ghcwpc%2Ffuqdnye%2F&data=02%7C01%7Csimonpj%40microsoft.com%7Cd49efc7bddcb4e35d70808d81060fbbb%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637277356582121415&sdata=2NcIa7wPP%2Bcwu%2B8%2FoctSe8pj%2Fsl9O5BGbNIbsxTfj5U%3D&reserved=0" target="_blank">on
/r/haskell</a> before I saw you also sent this to ghc-devs. I’ve decided to reproduce my comment here as well, since this list probably has a more relevant audience:<u></u><u></u></p>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
I want to start by saying that I think this sounds totally awesome, and I think it’s a fantastic idea. I’m really interested in seeing how this progresses!<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<br>
I do wonder if people might find the name a little misleading. “Whole program compilation” usually implies “whole program optimization,” but most of GHC’s key optimizations happen at the Core level, before STG is even generated. (Of course, I’m sure you’re
well aware of that, I’m just stating it for the sake of others who might be reading who aren’t aware.)<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<br>
This seems much closer in spirit to “link-time optimization” (LTO) as performed by Clang and GCC than whole program compilation. For example, Clang’s LTO works by “linking” LLVM bitcode files instead of fully-compiled native objects. STG is not quite analogous
to LLVM IR—GHC’s analog would be Cmm, not STG—but I think that difference is not that significant here: the STG-to-Cmm pass is quite mechanical, and STG is mostly just easier to manipulate.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<br>
tl;dr: Have you considered naming this project GHC-LTO instead of GHC-WPC?<u></u><u></u></p>
</div>
</blockquote>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Alexis<u></u><u></u></p>
<div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<br>
<br>
<u></u><u></u></p>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
On Jun 12, 2020, at 16:16, Csaba Hruska <<a href="mailto:csaba.hruska@gmail.com" target="_blank">csaba.hruska@gmail.com</a>> wrote:<u></u><u></u></p>
</div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
<div>
<div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Hello,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
I've created a whole program compilation pipeline for GHC via STG.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Please read my blog post for the details:<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<a href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.patreon.com%2Fposts%2Fintroducing-ghc-38173710&data=02%7C01%7Csimonpj%40microsoft.com%7Cd49efc7bddcb4e35d70808d81060fbbb%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637277356582131411&sdata=%2BF1KN0%2BuZbeW%2F3wTFOCvNVN9UWxY8wPhcahnN7Tx7DQ%3D&reserved=0" target="_blank">Introducing
GHC whole program compiler (GHC-WPC)</a><u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
<u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Regards,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-right:0cm;margin-bottom:6pt;margin-left:0cm">
Csaba Hruska<u></u><u></u></p>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote></div>