CMM-to-ASM: Register allocation weirdness
ben at smart-cactus.org
Sun Jun 19 08:33:53 UTC 2016
Harendra Kumar <harendra.kumar at gmail.com> writes:
> Thanks Ben! I have my responses inline below.
> On 16 June 2016 at 18:07, Ben Gamari <ben at smart-cactus.org> wrote:
>> For the record, I have also struggled with register spilling issues in
>> the past. See, for instance, #10012, which describes a behavior which
>> arises from the C-- sinking pass's unwillingness to duplicate code
>> across branches. While in general it's good to avoid the code bloat that
>> this duplication implies, in the case shown in that ticket duplicating
>> the computation would be significantly less code than the bloat from
>> spilling the needed results.
> Not sure if this is possible, but when unsure we could try both and
> compare whether the duplication results in significantly more code than
> no duplication, and make a decision based on that. Though that would
> slow down compilation. Maybe we could bundle slower passes under
> something like -O3, meaning it will be slow and may or may not provide
> better results?
Indeed this would be one option, although I suspect we can do better.
I have discussed the problem with a few people and have some ideas on
how to proceed. Unfortunately I've been suffering from a chronic lack of
time to pursue them.
>> Very interesting, thanks for writing this down! Indeed if these checks
>> really are redundant then we should try to avoid them. Do you have any
>> code you could share that demonstrates this?
> I have the code to produce this CMM, I can commit it on a branch and leave
> it in the github repository so that we can use it for fixing.
Indeed it would be great if you could provide the program that produced
this Cmm.
>> It would be great to open Trac tickets to track some of the optimization
>> opportunities discussed here.
> Will do.
>> Furthermore, there are a few annoying impedance mismatches between Cmm
>> and LLVM's representation. This can be seen in our treatment of proc
>> points: when we need to take the address of a block within a function
>> LLVM requires that we break the block into a separate procedure, hiding
>> many potential optimizations from the optimizer. This was discussed
>> further on this list earlier this year. It would be great to
>> eliminate proc-point splitting but doing so will almost certainly
>> require cooperation from LLVM.
> It sounds like we need to continue with both for now and see how the LLVM
> option pans out. There is clearly no reason for a decisive tilt towards
> LLVM in the near future.