[GHC] #7580: Building PrimOps.cmm on OS X with LLVM 3.2 fails

GHC cvs-ghc at haskell.org
Mon Jan 14 12:56:52 CET 2013


#7580: Building PrimOps.cmm on OS X with LLVM 3.2 fails
---------------------------------+------------------------------------------
    Reporter:  thoughtpolice     |       Owner:  thoughtpolice      
        Type:  bug               |      Status:  new                
    Priority:  normal            |   Milestone:                     
   Component:  Compiler (LLVM)   |     Version:  7.7                
    Keywords:                    |          Os:  MacOS X            
Architecture:  Unknown/Multiple  |     Failure:  Building GHC failed
  Difficulty:  Unknown           |    Testcase:                     
   Blockedby:                    |    Blocking:                     
     Related:  #7571, #7575      |  
---------------------------------+------------------------------------------
Changes (by thoughtpolice):

  * owner:  dterei => thoughtpolice


Comment:

 I have fixed this bug, though I cannot quite answer your question.

 The bugfix: in a nutshell, if you look at the snippet I posted, the
 key is the definition of ```exprToVar```:

 {{{
 exprToVar :: LlvmEnv -> CmmExpr -> UniqSM ExprData
 exprToVar env = exprToVarOpt env (wordOption (getDflags env))
 }}}

 The ```exprToVarOpt``` call essentially forces the return width of the
 expression; by default it is the word size of the build (64 here). In
 this case, we detected that we needed to convert an i32 to an i64 (the
 ```w | w < toWidth``` branch). So we execute ```sameConv'```, which
 invokes ```exprToVar```, which turns ```vx``` into an ```LlvmVar``` with
 the width of the *native* word size, which in this case is i64. So the
 check passes, but the variable we substitute for the coercion is of the
 wrong type. Annoying.

 The fix looks like this:

 {{{
 sameConv from ty reduce expand = do
     x'@(env', vx, stmts, top) <- exprToVarOpt env
         (EOption $ Just $ widthToLlvmInt from) x
     ...
 }}}
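
 To make the mismatch concrete, here is a minimal standalone sketch. The
 names (```LlvmVar```, ```exprToVar```, ```exprToVarOpt```) only mimic the
 shape of the LLVM backend; the types are invented for illustration and
 are not the actual compiler code:

```haskell
-- Standalone model of the width bug and the fix. Not real GHC code:
-- Width, LlvmVar, exprToVar and exprToVarOpt are simplified stand-ins.
data Width = I32 | I64 deriving (Eq, Show)

data LlvmVar = LlvmVar { varName :: String, varWidth :: Width }
  deriving (Eq, Show)

-- The word size of the build (a 64-bit host here).
nativeWidth :: Width
nativeWidth = I64

-- Mimics exprToVarOpt: the caller chooses the width of the returned var.
exprToVarOpt :: Width -> String -> LlvmVar
exprToVarOpt w name = LlvmVar name w

-- Mimics exprToVar: no option given, so it forces the native word size,
-- regardless of the width the source expression actually had.
exprToVar :: String -> LlvmVar
exprToVar = exprToVarOpt nativeWidth

-- Buggy sameConv': an i32 source should come back as an i32 variable to
-- feed the widening coercion, but exprToVar hands back an i64, so the
-- conversion we emit is ill-typed.
buggyConv :: Width -> String -> LlvmVar
buggyConv _from = exprToVar

-- Fixed sameConv': explicitly request the source width.
fixedConv :: Width -> String -> LlvmVar
fixedConv from = exprToVarOpt from

main :: IO ()
main = do
  print (varWidth (buggyConv I32 "vx"))  -- I64: the wrong type
  print (varWidth (fixedConv I32 "vx"))  -- I32: matches the source width
```

 The real fix is exactly the ```EOption $ Just $ widthToLlvmInt from```
 argument above: pass the source width through instead of letting the
 default (native) width win.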

 I will create a patch and try to narrow down a test case. The patch I
 cooked up depends on #7571, which has a prerequisite patch and explanation
 of this same issue. I'll post it shortly.

 As for the cast, I am not precisely sure where it comes from. It comes
 from the ```MO_UU_Conv``` MachOp, which just does an unsigned-to-unsigned
 conversion (optionally extending or truncating). I don't know why it is
 inserted for the store to ```Sp```.

 As for the loads/stores: isn't that natural? It looks fine to me. Before
 we pass code to ```opt```, the backend generates loads and stores for
 every variable modification instead of using phi nodes for control flow.
 This is so that we can use LLVM's ```mem2reg``` pass to do the phi
 conversion for us, since we're essentially in CPS form at that point.
 The code shown by ```-ddump-llvm``` is of course from before opt is run.

 Looking at the code, it seems pretty in line with the snippet of Cmm from
 the ```-ddump-cmm``` bit I posted, no?

-- 
Ticket URL: <http://hackage.haskell.org/trac/ghc/ticket/7580#comment:4>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler


