[Git][ghc/ghc][wip/marge_bot_batch_merge_job] 4 commits: LLVM: account for register type in funPrologue
Marge Bot (@marge-bot)
gitlab at gitlab.haskell.org
Tue Feb 25 14:31:41 UTC 2025
Marge Bot pushed to branch wip/marge_bot_batch_merge_job at Glasgow Haskell Compiler / GHC
Commits:
33aca30f by sheaf at 2025-02-25T08:58:46-05:00
LLVM: account for register type in funPrologue
We were not properly accounting for the live register type of
global registers in GHC.CmmToLlvm.CodeGen.funPrologue. This meant that
we could allocate a register at type <4 x i32> but try to write to it
at type <8 x i16>, which LLVM doesn't much like.
This patch fixes that by inserting intermediate casts when necessary.
Fixes #25730
- - - - -
0eb58b0e by sheaf at 2025-02-25T08:59:29-05:00
base: make Data.List.NonEmpty.unzip match Data.List
This commit makes Data.List.NonEmpty.unzip match the implementation
of Data.List, as was suggested in approved CLC proposal #107.
- - - - -
fad1736d by Matthew Pickering at 2025-02-25T09:31:11-05:00
interpreter: Fix underflow frame lookups
BCOs can be nested, resulting in nested BCO stack frames where the innermost
stack frame can refer to variables stored on earlier stack frames via the
PUSH_L instruction.
|---------|
| BCO_1 | -<-┐
|---------|
......... |
|---------| | PUSH_L <n>
| BCO_N | ->-┘
|---------|
Here BCO_N is syntactically nested within the code for BCO_1 and will result
in code that references the prior stack frame of BCO_1 for some of its local
variables. If a stack overflow happens between the creation of the stack frame
for BCO_1 and BCO_N the RTS might move BCO_N to a new stack chunk while leaving
BCO_1 in place, invalidating a simple offset-based reference to the outer stack
frames.
Therefore `ReadSpW` first performs a bounds check to ensure that accesses onto
the stack will succeed. If the target address would not be a valid location for
the current stack chunk then the `slow_spw` function is called, which dereferences
the underflow frame to adjust the offset before performing the lookup.
┌->--x | CHK_1 |
| CHK_2 | | | |---------|
|---------| | └-> | BCO_1 |
| UD_FLOW | -- x |---------|
|---------| |
| ...... | |
|---------| | PUSH_L <n>
| BCO_N  | ->-┘
|---------|
Fixes #25750
- - - - -
fbd05651 by Vladislav Zavialov at 2025-02-25T09:31:12-05:00
Remove ArgPatBuilder
ArgPatBuilder in Parser/PostProcess.hs became redundant with the
introduction of InvisPat (36a75b80eb).
This small refactoring removes it.
- - - - -
11 changed files:
- compiler/GHC/Cmm/Reg.hs
- compiler/GHC/CmmToLlvm/Base.hs
- compiler/GHC/CmmToLlvm/CodeGen.hs
- compiler/GHC/Llvm/Types.hs
- compiler/GHC/Parser/PostProcess.hs
- libraries/base/src/Data/List/NonEmpty.hs
- rts/Interpreter.c
- + testsuite/tests/llvm/should_run/T25730.hs
- + testsuite/tests/llvm/should_run/T25730.stdout
- + testsuite/tests/llvm/should_run/T25730C.c
- testsuite/tests/llvm/should_run/all.T
Changes:
=====================================
compiler/GHC/Cmm/Reg.hs
=====================================
@@ -98,8 +98,8 @@ instance Outputable CmmReg where
pprReg :: CmmReg -> SDoc
pprReg r
= case r of
- CmmLocal local -> pprLocalReg local
- CmmGlobal (GlobalRegUse global _ty) -> pprGlobalReg global
+ CmmLocal local -> pprLocalReg local
+ CmmGlobal (GlobalRegUse global _) -> pprGlobalReg global
cmmRegType :: CmmReg -> CmmType
cmmRegType (CmmLocal reg) = localRegType reg
=====================================
compiler/GHC/CmmToLlvm/Base.hs
=====================================
@@ -290,7 +290,7 @@ data LlvmEnv = LlvmEnv
-- the following get cleared for every function (see @withClearVars@)
, envVarMap :: LlvmEnvMap -- ^ Local variables so far, with type
- , envStackRegs :: [GlobalReg] -- ^ Non-constant registers (alloca'd in the function prelude)
+ , envStackRegs :: [GlobalRegUse] -- ^ Non-constant registers (alloca'd in the function prelude)
}
type LlvmEnvMap = UniqFM Unique LlvmType
@@ -374,12 +374,14 @@ varLookup s = getEnv (flip lookupUFM (getUnique s) . envVarMap)
funLookup s = getEnv (flip lookupUFM (getUnique s) . envFunMap)
-- | Set a register as allocated on the stack
-markStackReg :: GlobalReg -> LlvmM ()
+markStackReg :: GlobalRegUse -> LlvmM ()
markStackReg r = modifyEnv $ \env -> env { envStackRegs = r : envStackRegs env }
-- | Check whether a register is allocated on the stack
-checkStackReg :: GlobalReg -> LlvmM Bool
-checkStackReg r = getEnv ((elem r) . envStackRegs)
+checkStackReg :: GlobalReg -> LlvmM (Maybe CmmType)
+checkStackReg r = do
+ stack_regs <- getEnv envStackRegs
+ return $ fmap globalRegUse_type $ lookupRegUse r stack_regs
-- | Allocate a new global unnamed metadata identifier
getMetaUniqueId :: LlvmM MetaId
=====================================
compiler/GHC/CmmToLlvm/CodeGen.hs
=====================================
@@ -47,7 +47,7 @@ import Data.Foldable ( toList )
import Data.List ( nub )
import qualified Data.List as List
import Data.List.NonEmpty ( NonEmpty (..), nonEmpty )
-import Data.Maybe ( catMaybes, isJust )
+import Data.Maybe ( catMaybes )
type Atomic = Maybe MemoryOrdering
type LlvmStatements = OrdList LlvmStatement
@@ -202,9 +202,8 @@ genCall (PrimTarget MO_Touch) _ _ =
return (nilOL, [])
genCall (PrimTarget (MO_UF_Conv w)) [dst] [e] = runStmtsDecls $ do
- dstV <- getCmmRegW (CmmLocal dst)
- let ty = cmmToLlvmType $ localRegType dst
- width = widthToLlvmFloat w
+ (dstV, ty) <- getCmmRegW (CmmLocal dst)
+ let width = widthToLlvmFloat w
castV <- lift $ mkLocalVar ty
ve <- exprToVarW e
statement $ Assignment castV $ Cast LM_Uitofp ve width
@@ -255,7 +254,7 @@ genCall (PrimTarget (MO_AtomicRMW width amop)) [dst] [addr, n] = runStmtsDecls $
let targetTy = widthToLlvmInt width
ptrExpr = Cast LM_Inttoptr addrVar (pLift targetTy)
ptrVar <- doExprW (pLift targetTy) ptrExpr
- dstVar <- getCmmRegW (CmmLocal dst)
+ (dstVar, _dst_ty) <- getCmmRegW (CmmLocal dst)
let op = case amop of
AMO_Add -> LAO_Add
AMO_Sub -> LAO_Sub
@@ -267,7 +266,7 @@ genCall (PrimTarget (MO_AtomicRMW width amop)) [dst] [addr, n] = runStmtsDecls $
statement $ Store retVar dstVar Nothing []
genCall (PrimTarget (MO_AtomicRead _ mem_ord)) [dst] [addr] = runStmtsDecls $ do
- dstV <- getCmmRegW (CmmLocal dst)
+ (dstV, _dst_ty) <- getCmmRegW (CmmLocal dst)
v1 <- genLoadW (Just mem_ord) addr (localRegType dst) NaturallyAligned
statement $ Store v1 dstV Nothing []
@@ -279,14 +278,14 @@ genCall (PrimTarget (MO_Cmpxchg _width))
let targetTy = getVarType oldVar
ptrExpr = Cast LM_Inttoptr addrVar (pLift targetTy)
ptrVar <- doExprW (pLift targetTy) ptrExpr
- dstVar <- getCmmRegW (CmmLocal dst)
+ (dstVar, _dst_ty) <- getCmmRegW (CmmLocal dst)
retVar <- doExprW (LMStructU [targetTy,i1])
$ CmpXChg ptrVar oldVar newVar SyncSeqCst SyncSeqCst
retVar' <- doExprW targetTy $ ExtractV retVar 0
statement $ Store retVar' dstVar Nothing []
genCall (PrimTarget (MO_Xchg _width)) [dst] [addr, val] = runStmtsDecls $ do
- dstV <- getCmmRegW (CmmLocal dst) :: WriterT LlvmAccum LlvmM LlvmVar
+ (dstV, _dst_ty) <- getCmmRegW (CmmLocal dst)
addrVar <- exprToVarW addr
valVar <- exprToVarW val
let ptrTy = pLift $ getVarType valVar
@@ -352,8 +351,8 @@ genCall (PrimTarget (MO_U_Mul2 w)) [dstH, dstL] [lhs, rhs] = runStmtsDecls $ do
retShifted <- doExprW width2x $ LlvmOp LM_MO_LShr retV widthLlvmLit
-- And extract them into retH.
retH <- doExprW width $ Cast LM_Trunc retShifted width
- dstRegL <- getCmmRegW (CmmLocal dstL)
- dstRegH <- getCmmRegW (CmmLocal dstH)
+ (dstRegL, _dstL_ty) <- getCmmRegW (CmmLocal dstL)
+ (dstRegH, _dstH_ty) <- getCmmRegW (CmmLocal dstH)
statement $ Store retL dstRegL Nothing []
statement $ Store retH dstRegH Nothing []
@@ -383,9 +382,9 @@ genCall (PrimTarget (MO_S_Mul2 w)) [dstC, dstH, dstL] [lhs, rhs] = runStmtsDecls
retH' <- doExprW width $ LlvmOp LM_MO_AShr retL widthLlvmLitm1
retC1 <- doExprW i1 $ Compare LM_CMP_Ne retH retH' -- Compare op returns a 1-bit value (i1)
retC <- doExprW width $ Cast LM_Zext retC1 width -- so we zero-extend it
- dstRegL <- getCmmRegW (CmmLocal dstL)
- dstRegH <- getCmmRegW (CmmLocal dstH)
- dstRegC <- getCmmRegW (CmmLocal dstC)
+ (dstRegL, _dstL_ty) <- getCmmRegW (CmmLocal dstL)
+ (dstRegH, _dstH_ty) <- getCmmRegW (CmmLocal dstH)
+ (dstRegC, _dstC_ty) <- getCmmRegW (CmmLocal dstC)
statement $ Store retL dstRegL Nothing []
statement $ Store retH dstRegH Nothing []
statement $ Store retC dstRegC Nothing []
@@ -420,8 +419,8 @@ genCall (PrimTarget (MO_U_QuotRem2 w))
let narrow var = doExprW width $ Cast LM_Trunc var width
retDiv <- narrow retExtDiv
retRem <- narrow retExtRem
- dstRegQ <- lift $ getCmmReg (CmmLocal dstQ)
- dstRegR <- lift $ getCmmReg (CmmLocal dstR)
+ (dstRegQ, _dstQ_ty) <- lift $ getCmmReg (CmmLocal dstQ)
+ (dstRegR, _dstR_ty) <- lift $ getCmmReg (CmmLocal dstR)
statement $ Store retDiv dstRegQ Nothing []
statement $ Store retRem dstRegR Nothing []
@@ -504,7 +503,6 @@ genCall target res args = do
let funTy = \name -> LMFunction $ LlvmFunctionDecl name ExternallyVisible
lmconv retTy FixedArgs argTy (llvmFunAlign platform)
-
argVars <- arg_varsW args_hints ([], nilOL, [])
fptr <- getFunPtrW funTy target
@@ -524,23 +522,21 @@ genCall target res args = do
ret_reg t = panic $ "genCall: Bad number of registers! Can only handle"
++ " 1, given " ++ show (length t) ++ "."
let creg = ret_reg res
- vreg <- getCmmRegW (CmmLocal creg)
- if retTy == pLower (getVarType vreg)
- then do
- statement $ Store v1 vreg Nothing []
- doReturn
- else do
- let ty = pLower $ getVarType vreg
- let op = case ty of
- vt | isPointer vt -> LM_Bitcast
- | isInt vt -> LM_Ptrtoint
- | otherwise ->
- panic $ "genCall: CmmReg bad match for"
- ++ " returned type!"
-
- v2 <- doExprW ty $ Cast op v1 ty
- statement $ Store v2 vreg Nothing []
- doReturn
+ (vreg, ty) <- getCmmRegW (CmmLocal creg)
+ if retTy == ty
+ then do
+ statement $ Store v1 vreg Nothing []
+ doReturn
+ else do
+ let op = case ty of
+ vt | isPointer vt -> LM_Bitcast
+ | isInt vt -> LM_Ptrtoint
+ | otherwise ->
+ panic $ "genCall: CmmReg bad match for"
+ ++ " returned type!"
+ v2 <- doExprW ty $ Cast op v1 ty
+ statement $ Store v2 vreg Nothing []
+ doReturn
-- | Generate a call to an LLVM intrinsic that performs arithmetic operation
-- with overflow bit (i.e., returns a struct containing the actual result of the
@@ -566,8 +562,8 @@ genCallWithOverflow t@(PrimTarget op) w [dstV, dstO] [lhs, rhs] = do
-- value is i<width>, but overflowBit is i1, so we need to cast (Cmm expects
-- both to be i<width>)
(overflow, zext) <- doExpr width $ Cast LM_Zext overflowBit width
- dstRegV <- getCmmReg (CmmLocal dstV)
- dstRegO <- getCmmReg (CmmLocal dstO)
+ (dstRegV, _dstV_ty) <- getCmmReg (CmmLocal dstV)
+ (dstRegO, _dstO_ty) <- getCmmReg (CmmLocal dstO)
let storeV = Store value dstRegV Nothing []
storeO = Store overflow dstRegO Nothing []
return (stmts `snocOL` zext `snocOL` storeV `snocOL` storeO, top)
@@ -625,7 +621,7 @@ genCallSimpleCast w t@(PrimTarget op) [dst] args = do
fname <- cmmPrimOpFunctions op
(fptr, _, top3) <- getInstrinct fname width [width]
- dstV <- getCmmReg (CmmLocal dst)
+ (dstV, _dst_ty) <- getCmmReg (CmmLocal dst)
let (_, arg_hints) = foreignTargetHints t
let args_hints = zip args arg_hints
@@ -657,7 +653,7 @@ genCallSimpleCast2 w t@(PrimTarget op) [dst] args = do
fname <- cmmPrimOpFunctions op
(fptr, _, top3) <- getInstrinct fname width (const width <$> args)
- dstV <- getCmmReg (CmmLocal dst)
+ (dstV, _dst_ty) <- getCmmReg (CmmLocal dst)
let (_, arg_hints) = foreignTargetHints t
let args_hints = zip args arg_hints
@@ -1089,11 +1085,9 @@ genJump expr live = do
-- these with registers when possible.
genAssign :: CmmReg -> CmmExpr -> LlvmM StmtData
genAssign reg val = do
- vreg <- getCmmReg reg
+ (vreg, ty) <- getCmmReg reg
(vval, stmts2, top2) <- exprToVar val
let stmts = stmts2
-
- let ty = (pLower . getVarType) vreg
platform <- getPlatform
case ty of
-- Some registers are pointer types, so need to cast value to pointer
@@ -2047,42 +2041,58 @@ mkLoad atomic vptr alignment
-- | Handle CmmReg expression. This will return a pointer to the stack
-- location of the register. Throws an error if it isn't allocated on
-- the stack.
-getCmmReg :: CmmReg -> LlvmM LlvmVar
+getCmmReg :: CmmReg -> LlvmM (LlvmVar, LlvmType)
getCmmReg (CmmLocal (LocalReg un _))
= do exists <- varLookup un
case exists of
- Just ety -> return (LMLocalVar un $ pLift ety)
+ Just ety -> return (LMLocalVar un $ pLift ety, ety)
Nothing -> pprPanic "getCmmReg: Cmm register " $
ppr un <> text " was not allocated!"
-- This should never happen, as every local variable should
-- have been assigned a value at some point, triggering
-- "funPrologue" to allocate it on the stack.
-getCmmReg (CmmGlobal ru@(GlobalRegUse r _))
- = do onStack <- checkStackReg r
+getCmmReg (CmmGlobal (GlobalRegUse reg _reg_ty))
+ = do onStack <- checkStackReg reg
platform <- getPlatform
- if onStack
- then return (lmGlobalRegVar platform ru)
- else pprPanic "getCmmReg: Cmm register " $
- ppr r <> text " not stack-allocated!"
+ case onStack of
+ Just stack_ty -> do
+ let var = lmGlobalRegVar platform (GlobalRegUse reg stack_ty)
+ return (var, pLower $ getVarType var)
+ Nothing ->
+ pprPanic "getCmmReg: Cmm register " $
+ ppr reg <> text " not stack-allocated!"
-- | Return the value of a given register, as well as its type. Might
-- need to be load from stack.
getCmmRegVal :: CmmReg -> LlvmM (LlvmVar, LlvmType, LlvmStatements)
getCmmRegVal reg =
case reg of
- CmmGlobal g -> do
- onStack <- checkStackReg (globalRegUse_reg g)
+ CmmGlobal gu@(GlobalRegUse g _) -> do
+ onStack <- checkStackReg g
platform <- getPlatform
- if onStack then loadFromStack else do
- let r = lmGlobalRegArg platform g
- return (r, getVarType r, nilOL)
+ case onStack of
+ Just {} ->
+ loadFromStack
+ Nothing -> do
+ let r = lmGlobalRegArg platform gu
+ return (r, getVarType r, nilOL)
_ -> loadFromStack
- where loadFromStack = do
- ptr <- getCmmReg reg
- let ty = pLower $ getVarType ptr
- (v, s) <- doExpr ty (Load ptr Nothing)
- return (v, ty, unitOL s)
+ where
+ loadFromStack = do
+ platform <- getPlatform
+ (ptr, stack_reg_ty) <- getCmmReg reg
+ let reg_ty = case reg of
+ CmmGlobal g -> pLower $ getVarType $ lmGlobalRegVar platform g
+ CmmLocal {} -> stack_reg_ty
+ if reg_ty /= stack_reg_ty
+ then do
+ (v1, s1) <- doExpr stack_reg_ty (Load ptr Nothing)
+ (v2, s2) <- doExpr reg_ty (Cast LM_Bitcast v1 reg_ty)
+ return (v2, reg_ty, toOL [s1, s2])
+ else do
+ (v, s) <- doExpr reg_ty (Load ptr Nothing)
+ return (v, reg_ty, unitOL s)
-- | Allocate a local CmmReg on the stack
allocReg :: CmmReg -> (LlvmVar, LlvmStatements)
@@ -2215,15 +2225,29 @@ funPrologue live cmmBlocks = do
let (newv, stmts) = allocReg reg
varInsert un (pLower $ getVarType newv)
return stmts
- CmmGlobal ru@(GlobalRegUse r _) -> do
+ CmmGlobal ru@(GlobalRegUse r ty0) -> do
let reg = lmGlobalRegVar platform ru
- arg = lmGlobalRegArg platform ru
ty = (pLower . getVarType) reg
trash = LMLitVar $ LMUndefLit ty
- rval = if isJust (mbLive r) then arg else trash
+ rval = case mbLive r of
+ Just (GlobalRegUse _ ty') ->
+ lmGlobalRegArg platform (GlobalRegUse r ty')
+ _ -> trash
alloc = Assignment reg $ Alloca (pLower $ getVarType reg) 1
- markStackReg r
- return $ toOL [alloc, Store rval reg Nothing []]
+ markStackReg ru
+ case mbLive r of
+ Just (GlobalRegUse _ ty')
+ | let llvm_ty = cmmToLlvmType ty0
+ llvm_ty' = cmmToLlvmType ty'
+ , llvm_ty /= llvm_ty'
+ -> do castV <- mkLocalVar (pLift llvm_ty')
+ return $
+ toOL [ alloc
+ , Assignment castV $ Cast LM_Bitcast reg (pLift llvm_ty')
+ , Store rval castV Nothing []
+ ]
+ _ ->
+ return $ toOL [alloc, Store rval reg Nothing []]
return (concatOL stmtss `snocOL` jumpToEntry, [])
where
@@ -2387,7 +2411,7 @@ runStmtsDecls action = do
LlvmAccum stmts decls <- execWriterT action
return (stmts, decls)
-getCmmRegW :: CmmReg -> WriterT LlvmAccum LlvmM LlvmVar
+getCmmRegW :: CmmReg -> WriterT LlvmAccum LlvmM (LlvmVar, LlvmType)
getCmmRegW = lift . getCmmReg
genLoadW :: Atomic -> CmmExpr -> CmmType -> AlignmentSpec -> WriterT LlvmAccum LlvmM LlvmVar
=====================================
compiler/GHC/Llvm/Types.hs
=====================================
@@ -239,7 +239,7 @@ pVarLift :: LlvmVar -> LlvmVar
pVarLift (LMGlobalVar s t l x a c) = LMGlobalVar s (pLift t) l x a c
pVarLift (LMLocalVar s t ) = LMLocalVar s (pLift t)
pVarLift (LMNLocalVar s t ) = LMNLocalVar s (pLift t)
-pVarLift (LMLitVar _ ) = error $ "Can't lower a literal type!"
+pVarLift (LMLitVar _ ) = error $ "Can't lift a literal type!"
-- | Remove the pointer indirection of the supplied type. Only 'LMPointer'
-- constructors can be lowered.
=====================================
compiler/GHC/Parser/PostProcess.hs
=====================================
@@ -1257,10 +1257,6 @@ checkPattern = runPV . checkLPat
checkPattern_details :: ParseContext -> PV (LocatedA (PatBuilder GhcPs)) -> P (LPat GhcPs)
checkPattern_details extraDetails pp = runPV_details extraDetails (pp >>= checkLPat)
-checkLArgPat :: LocatedA (ArgPatBuilder GhcPs) -> PV (LPat GhcPs)
-checkLArgPat (L l (ArgPatBuilderVisPat p)) = checkLPat (L l p)
-checkLArgPat (L l (ArgPatBuilderArgPat p)) = return (L l p)
-
checkLPat :: LocatedA (PatBuilder GhcPs) -> PV (LPat GhcPs)
checkLPat (L l@(EpAnn anc an _) p) = do
(L l' p', cs) <- checkPat (EpAnn anc an emptyComments) emptyComments (L l p) [] []
@@ -1398,11 +1394,11 @@ checkFunBind :: SrcSpan
-> AnnFunRhs
-> LocatedN RdrName
-> LexicalFixity
- -> LocatedE [LocatedA (ArgPatBuilder GhcPs)]
+ -> LocatedE [LocatedA (PatBuilder GhcPs)]
-> Located (GRHSs GhcPs (LHsExpr GhcPs))
-> P (HsBind GhcPs)
checkFunBind locF ann_fun (L lf fun) is_infix (L lp pats) (L _ grhss)
- = do ps <- runPV_details extraDetails (mapM checkLArgPat pats)
+ = do ps <- runPV_details extraDetails (mapM checkLPat pats)
let match_span = noAnnSrcSpan $ locF
return (makeFunBind (L (l2l lf) fun) (L (noAnnSrcSpan $ locA match_span)
[L match_span (Match { m_ext = noExtField
@@ -1483,20 +1479,18 @@ checkDoAndIfThenElse err guardExpr semiThen thenExpr semiElse elseExpr
isFunLhs :: LocatedA (PatBuilder GhcPs)
-> P (Maybe (LocatedN RdrName, LexicalFixity,
- [LocatedA (ArgPatBuilder GhcPs)],[EpToken "("],[EpToken ")"]))
+ [LocatedA (PatBuilder GhcPs)],[EpToken "("],[EpToken ")"]))
-- A variable binding is parsed as a FunBind.
-- Just (fun, is_infix, arg_pats) if e is a function LHS
isFunLhs e = go e [] [] []
where
- mk = fmap ArgPatBuilderVisPat
-
go (L l (PatBuilderVar (L loc f))) es ops cps
| not (isRdrDataCon f) = do
let (_l, loc') = transferCommentsOnlyA l loc
return (Just (L loc' f, Prefix, es, (reverse ops), cps))
go (L l (PatBuilderApp (L lf f) e)) es ops cps = do
let (_l, lf') = transferCommentsOnlyA l lf
- go (L lf' f) (mk e:es) ops cps
+ go (L lf' f) (e:es) ops cps
go (L l (PatBuilderPar _ (L le e) _)) es@(_:_) ops cps = go (L le' e) es (o:ops) (c:cps)
-- NB: es@(_:_) means that there must be an arg after the parens for the
-- LHS to be a function LHS. This corresponds to the Haskell Report's definition
@@ -1507,33 +1501,25 @@ isFunLhs e = go e [] [] []
go (L loc (PatBuilderOpApp (L ll l) (L loc' op) r (os,cs))) es ops cps
| not (isRdrDataCon op) -- We have found the function!
= do { let (_l, ll') = transferCommentsOnlyA loc ll
- ; return (Just (L loc' op, Infix, (mk (L ll' l):mk r:es), (os ++ reverse ops), (cs ++ cps))) }
+ ; return (Just (L loc' op, Infix, ((L ll' l):r:es), (os ++ reverse ops), (cs ++ cps))) }
| otherwise -- Infix data con; keep going
= do { let (_l, ll') = transferCommentsOnlyA loc ll
; mb_l <- go (L ll' l) es ops cps
; return (reassociate =<< mb_l) }
where
- reassociate (op', Infix, j : L k_loc (ArgPatBuilderVisPat k) : es', ops', cps')
+ reassociate (op', Infix, j : L k_loc k : es', ops', cps')
= Just (op', Infix, j : op_app : es', ops', cps')
where
- op_app = mk $ L loc (PatBuilderOpApp (L k_loc k)
+ op_app = L loc (PatBuilderOpApp (L k_loc k)
(L loc' op) r (reverse ops, cps))
reassociate _other = Nothing
go (L l (PatBuilderAppType (L lp pat) tok ty_pat@(HsTP _ (L (EpAnn anc ann cs) _)))) es ops cps
- = go (L lp' pat) (L (EpAnn anc' ann cs) (ArgPatBuilderArgPat invis_pat) : es) ops cps
+ = go (L lp' pat) (L (EpAnn anc' ann cs) (PatBuilderPat invis_pat) : es) ops cps
where invis_pat = InvisPat (tok, SpecifiedSpec) ty_pat
anc' = widenAnchorT anc tok
(_l, lp') = transferCommentsOnlyA l lp
go _ _ _ _ = return Nothing
-data ArgPatBuilder p
- = ArgPatBuilderVisPat (PatBuilder p)
- | ArgPatBuilderArgPat (Pat p)
-
-instance Outputable (ArgPatBuilder GhcPs) where
- ppr (ArgPatBuilderVisPat p) = ppr p
- ppr (ArgPatBuilderArgPat p) = ppr p
-
mkBangTy :: EpaLocation -> SrcStrictness -> LHsType GhcPs -> HsType GhcPs
mkBangTy tok_loc strictness =
HsBangTy ((noAnn, noAnn, tok_loc), NoSourceText) (HsBang NoSrcUnpack strictness)
=====================================
libraries/base/src/Data/List/NonEmpty.hs
=====================================
@@ -544,7 +544,9 @@ infixl 9 !!
-- | The 'unzip' function is the inverse of the 'zip' function.
unzip :: NonEmpty (a, b) -> (NonEmpty a, NonEmpty b)
-unzip xs = (fst <$> xs, snd <$> xs)
+unzip ((a, b) :| asbs) = (a :| as, b :| bs)
+ where
+ (as, bs) = List.unzip asbs
-- | The 'nub' function removes duplicate elements from a list. In
-- particular, it keeps only the first occurrence of each element.
=====================================
rts/Interpreter.c
=====================================
@@ -171,6 +171,54 @@ tag functions as tag inference currently doesn't rely on those being properly ta
#define SpW(n) (*(StgWord*)(Sp_plusW(n)))
#define SpB(n) (*(StgWord*)(Sp_plusB(n)))
+#define WITHIN_CAP_CHUNK_BOUNDS(n) WITHIN_CHUNK_BOUNDS(n, cap->r.rCurrentTSO->stackobj)
+
+#define WITHIN_CHUNK_BOUNDS(n, s) \
+ (RTS_LIKELY((StgWord*)(Sp_plusW(n)) < ((s)->stack + (s)->stack_size - sizeofW(StgUnderflowFrame))))
+
+
+/* Note [PUSH_L underflow]
+ ~~~~~~~~~~~~~~~~~~~~~~~
+BCOs can be nested, resulting in nested BCO stack frames where the innermost
+stack frame can refer to variables stored on earlier stack frames via the
+PUSH_L instruction.
+
+|---------|
+| BCO_1 | -<-┐
+|---------|
+ ......... |
+|---------| | PUSH_L <n>
+| BCO_N | ->-┘
+|---------|
+
+Here BCO_N is syntactically nested within the code for BCO_1 and will result
+in code that references the prior stack frame of BCO_1 for some of its local
+variables. If a stack overflow happens between the creation of the stack frame
+for BCO_1 and BCO_N the RTS might move BCO_N to a new stack chunk while leaving
+BCO_1 in place, invalidating a simple offset-based reference to the outer stack
+frames.
+Therefore `ReadSpW` first performs a bounds check to ensure that accesses onto
+the stack will succeed. If the target address would not be a valid location for
+the current stack chunk then the `slow_spw` function is called, which dereferences
+the underflow frame to adjust the offset before performing the lookup.
+
+ ┌->--x | CHK_1 |
+| CHK_2 | | | |---------|
+|---------| | └-> | BCO_1 |
+| UD_FLOW | -- x |---------|
+|---------| |
+| ...... | |
+|---------| | PUSH_L <n>
+| BCO_N  | ->-┘
+|---------|
+See ticket #25750
+
+*/
+
+#define ReadSpW(n) \
+ ((WITHIN_CAP_CHUNK_BOUNDS(n)) ? SpW(n): slow_spw(Sp, cap->r.rCurrentTSO->stackobj, n))
+
+
STATIC_INLINE StgPtr
allocate_NONUPD (Capability *cap, int n_words)
{
@@ -193,6 +241,8 @@ unsigned long it_retto_BCO;
unsigned long it_retto_UPDATE;
unsigned long it_retto_other;
+unsigned long it_underflow_lookups;
+
unsigned long it_slides;
unsigned long it_insns;
unsigned long it_BCO_entries;
@@ -209,6 +259,7 @@ void interp_startup ( void )
int i, j;
it_retto_BCO = it_retto_UPDATE = it_retto_other = 0;
it_total_entries = it_total_unknown_entries = 0;
+ it_underflow_lookups = 0;
for (i = 0; i < N_CLOSURE_TYPES; i++)
it_unknown_entries[i] = 0;
it_slides = it_insns = it_BCO_entries = 0;
@@ -229,6 +280,7 @@ void interp_shutdown ( void )
it_retto_BCO, it_retto_UPDATE, it_retto_other );
debugBelch("%lu total entries, %lu unknown entries \n",
it_total_entries, it_total_unknown_entries);
+ debugBelch("%lu lookups past the end of the stack frame\n", it_underflow_lookups);
for (i = 0; i < N_CLOSURE_TYPES; i++) {
if (it_unknown_entries[i] == 0) continue;
debugBelch(" type %2d: unknown entries (%4.1f%%) == %lu\n",
@@ -320,6 +372,53 @@ StgClosure * copyPAP (Capability *cap, StgPAP *oldpap)
#endif
+// See Note [PUSH_L underflow] for in which situations this
+// slow lookup is needed
+static StgWord
+slow_spw(void *Sp, StgStack *cur_stack, StgWord offset){
+ // 1. If in range, access the item from the current stack chunk
+ if (WITHIN_CHUNK_BOUNDS(offset, cur_stack)) {
+ return SpW(offset);
+ }
+ // 2. Not in this stack chunk, so access the underflow frame.
+ else {
+ StgWord stackWords;
+ StgUnderflowFrame *frame;
+ StgStack *new_stack;
+
+ frame = (StgUnderflowFrame*)(cur_stack->stack + cur_stack->stack_size
+ - sizeofW(StgUnderflowFrame));
+
+ // 2a. Check it is an underflow frame (the top stack chunk won't have one).
+ if( frame->info == &stg_stack_underflow_frame_d_info
+ || frame->info == &stg_stack_underflow_frame_v16_info
+ || frame->info == &stg_stack_underflow_frame_v32_info
+ || frame->info == &stg_stack_underflow_frame_v64_info )
+ {
+
+ INTERP_TICK(it_underflow_lookups);
+
+ new_stack = (StgStack*)frame->next_chunk;
+
+ // How many words were on the stack
+ stackWords = (StgWord *)frame - (StgWord *) Sp;
+ ASSERT(offset > stackWords);
+
+ // Recursive, in the very unlikely case we have to traverse two
+ // stack chunks.
+ return slow_spw(new_stack->sp, new_stack, offset-stackWords);
+ }
+ // 2b. Access the element if there is no underflow frame, it must be right
+ // at the top of the stack.
+ else {
+ // Not actually in the underflow case
+ return SpW(offset);
+ }
+
+ }
+
+}
+
// Compute the pointer tag for the constructor and tag the pointer;
// see Note [Data constructor dynamic tags] in GHC.StgToCmm.Closure.
//
@@ -401,7 +500,7 @@ interpretBCO (Capability* cap)
// +---------------+
//
else if (SpW(0) == (W_)&stg_apply_interp_info) {
- obj = UNTAG_CLOSURE((StgClosure *)SpW(1));
+ obj = UNTAG_CLOSURE((StgClosure *)ReadSpW(1));
Sp_addW(2);
goto run_BCO_fun;
}
@@ -413,7 +512,7 @@ interpretBCO (Capability* cap)
// do_return_pointer, below.
//
else if (SpW(0) == (W_)&stg_ret_p_info) {
- tagged_obj = (StgClosure *)SpW(1);
+ tagged_obj = (StgClosure *)ReadSpW(1);
Sp_addW(2);
goto do_return_pointer;
}
@@ -429,7 +528,7 @@ interpretBCO (Capability* cap)
// Evaluate the object on top of the stack.
eval:
- tagged_obj = (StgClosure*)SpW(0); Sp_addW(1);
+ tagged_obj = (StgClosure*)ReadSpW(0); Sp_addW(1);
eval_obj:
obj = UNTAG_CLOSURE(tagged_obj);
@@ -630,7 +729,7 @@ do_return_pointer:
info == (StgInfoTable *)&stg_restore_cccs_v32_info ||
info == (StgInfoTable *)&stg_restore_cccs_v64_info ||
info == (StgInfoTable *)&stg_restore_cccs_eval_info) {
- cap->r.rCCCS = (CostCentreStack*)SpW(1);
+ cap->r.rCCCS = (CostCentreStack*)ReadSpW(1);
Sp_addW(2);
goto do_return_pointer;
}
@@ -694,7 +793,7 @@ do_return_pointer:
INTERP_TICK(it_retto_BCO);
Sp_subW(1);
SpW(0) = (W_)tagged_obj;
- obj = (StgClosure*)SpW(2);
+ obj = (StgClosure*)ReadSpW(2);
ASSERT(get_itbl(obj)->type == BCO);
goto run_BCO_return_pointer;
@@ -741,12 +840,12 @@ do_return_nonpointer:
{
int offset;
- ASSERT( SpW(0) == (W_)&stg_ret_v_info
- || SpW(0) == (W_)&stg_ret_n_info
- || SpW(0) == (W_)&stg_ret_f_info
- || SpW(0) == (W_)&stg_ret_d_info
- || SpW(0) == (W_)&stg_ret_l_info
- || SpW(0) == (W_)&stg_ret_t_info
+ ASSERT( ReadSpW(0) == (W_)&stg_ret_v_info
+ || ReadSpW(0) == (W_)&stg_ret_n_info
+ || ReadSpW(0) == (W_)&stg_ret_f_info
+ || ReadSpW(0) == (W_)&stg_ret_d_info
+ || ReadSpW(0) == (W_)&stg_ret_l_info
+ || ReadSpW(0) == (W_)&stg_ret_t_info
);
IF_DEBUG(interpreter,
@@ -773,7 +872,7 @@ do_return_nonpointer:
// so the returned value is at the top of the stack, and start
// executing the BCO.
INTERP_TICK(it_retto_BCO);
- obj = (StgClosure*)SpW(offset+1);
+ obj = (StgClosure*)ReadSpW(offset+1);
ASSERT(get_itbl(obj)->type == BCO);
goto run_BCO_return_nonpointer;
@@ -835,7 +934,7 @@ do_apply:
// Shuffle the args for this function down, and put
// the appropriate info table in the gap.
for (i = 0; i < arity; i++) {
- SpW((int)i-1) = SpW(i);
+ SpW((int)i-1) = ReadSpW(i);
// ^^^^^ careful, i-1 might be negative, but i is unsigned
}
SpW(arity-1) = app_ptrs_itbl[n-arity-1];
@@ -874,7 +973,7 @@ do_apply:
new_pap->payload[i] = pap->payload[i];
}
for (i = 0; i < m; i++) {
- new_pap->payload[pap->n_args + i] = (StgClosure *)SpW(i);
+ new_pap->payload[pap->n_args + i] = (StgClosure *)ReadSpW(i);
}
// No write barrier is needed here as this is a new allocation
SET_HDR(new_pap,&stg_PAP_info,cap->r.rCCCS);
@@ -898,7 +997,7 @@ do_apply:
// Shuffle the args for this function down, and put
// the appropriate info table in the gap.
for (i = 0; i < arity; i++) {
- SpW((int)i-1) = SpW(i);
+ SpW((int)i-1) = ReadSpW(i);
// ^^^^^ careful, i-1 might be negative, but i is unsigned
}
SpW(arity-1) = app_ptrs_itbl[n-arity-1];
@@ -917,7 +1016,7 @@ do_apply:
pap->fun = obj;
pap->n_args = m;
for (i = 0; i < m; i++) {
- pap->payload[i] = (StgClosure *)SpW(i);
+ pap->payload[i] = (StgClosure *)ReadSpW(i);
}
// No write barrier is needed here as this is a new allocation
SET_HDR(pap, &stg_PAP_info,cap->r.rCCCS);
@@ -1034,7 +1133,7 @@ run_BCO_return_nonpointer:
*/
if(SpW(0) == (W_)&stg_ret_t_info) {
- cap->r.rCCCS = (CostCentreStack*)SpW(stack_frame_sizeW((StgClosure *)Sp) + 4);
+ cap->r.rCCCS = (CostCentreStack*)ReadSpW(stack_frame_sizeW((StgClosure *)Sp) + 4);
}
#endif
@@ -1101,7 +1200,7 @@ run_BCO:
if (0) { int i;
debugBelch("\n");
for (i = 8; i >= 0; i--) {
- debugBelch("%d %p\n", i, (void *) SpW(i));
+ debugBelch("%d %p\n", i, (void *) ReadSpW(i));
}
debugBelch("\n");
}
@@ -1203,7 +1302,7 @@ run_BCO:
// copy the contents of the top stack frame into the AP_STACK
for (i = 2; i < size_words; i++)
{
- new_aps->payload[i] = (StgClosure *)SpW(i-2);
+ new_aps->payload[i] = (StgClosure *)ReadSpW(i-2);
}
// No write barrier is needed here as this is a new allocation
@@ -1276,7 +1375,7 @@ run_BCO:
case bci_PUSH_L: {
W_ o1 = BCO_GET_LARGE_ARG;
- SpW(-1) = SpW(o1);
+ SpW(-1) = ReadSpW(o1);
Sp_subW(1);
goto nextInsn;
}
@@ -1284,8 +1383,8 @@ run_BCO:
case bci_PUSH_LL: {
W_ o1 = BCO_GET_LARGE_ARG;
W_ o2 = BCO_GET_LARGE_ARG;
- SpW(-1) = SpW(o1);
- SpW(-2) = SpW(o2);
+ SpW(-1) = ReadSpW(o1);
+ SpW(-2) = ReadSpW(o2);
Sp_subW(2);
goto nextInsn;
}
@@ -1294,9 +1393,9 @@ run_BCO:
W_ o1 = BCO_GET_LARGE_ARG;
W_ o2 = BCO_GET_LARGE_ARG;
W_ o3 = BCO_GET_LARGE_ARG;
- SpW(-1) = SpW(o1);
- SpW(-2) = SpW(o2);
- SpW(-3) = SpW(o3);
+ SpW(-1) = ReadSpW(o1);
+ SpW(-2) = ReadSpW(o2);
+ SpW(-3) = ReadSpW(o3);
Sp_subW(3);
goto nextInsn;
}
@@ -1650,7 +1749,7 @@ run_BCO:
* a_1 ... a_n, k
*/
while(n-- > 0) {
- SpW(n+by) = SpW(n);
+ SpW(n+by) = ReadSpW(n);
}
Sp_addW(by);
INTERP_TICK(it_slides);
@@ -1702,9 +1801,9 @@ run_BCO:
StgHalfWord i;
W_ stkoff = BCO_GET_LARGE_ARG;
StgHalfWord n_payload = BCO_GET_LARGE_ARG;
- StgAP* ap = (StgAP*)SpW(stkoff);
+ StgAP* ap = (StgAP*)ReadSpW(stkoff);
ASSERT(ap->n_args == n_payload);
- ap->fun = (StgClosure*)SpW(0);
+ ap->fun = (StgClosure*)ReadSpW(0);
// The function should be a BCO, and its bitmap should
// cover the payload of the AP correctly.
@@ -1712,7 +1811,7 @@ run_BCO:
&& BCO_BITMAP_SIZE(ap->fun) == ap->n_args);
for (i = 0; i < n_payload; i++) {
- ap->payload[i] = (StgClosure*)SpW(i+1);
+ ap->payload[i] = (StgClosure*)ReadSpW(i+1);
}
Sp_addW(n_payload+1);
IF_DEBUG(interpreter,
@@ -1726,9 +1825,9 @@ run_BCO:
StgHalfWord i;
W_ stkoff = BCO_GET_LARGE_ARG;
StgHalfWord n_payload = BCO_GET_LARGE_ARG;
- StgPAP* pap = (StgPAP*)SpW(stkoff);
+ StgPAP* pap = (StgPAP*)ReadSpW(stkoff);
ASSERT(pap->n_args == n_payload);
- pap->fun = (StgClosure*)SpW(0);
+ pap->fun = (StgClosure*)ReadSpW(0);
// The function should be a BCO
if (get_itbl(pap->fun)->type != BCO) {
@@ -1739,7 +1838,7 @@ run_BCO:
}
for (i = 0; i < n_payload; i++) {
- pap->payload[i] = (StgClosure*)SpW(i+1);
+ pap->payload[i] = (StgClosure*)ReadSpW(i+1);
}
Sp_addW(n_payload+1);
IF_DEBUG(interpreter,
@@ -1753,7 +1852,7 @@ run_BCO:
/* Unpack N ptr words from t.o.s constructor */
W_ i;
W_ n_words = BCO_GET_LARGE_ARG;
- StgClosure* con = UNTAG_CLOSURE((StgClosure*)SpW(0));
+ StgClosure* con = UNTAG_CLOSURE((StgClosure*)ReadSpW(0));
Sp_subW(n_words);
for (i = 0; i < n_words; i++) {
SpW(i) = (W_)con->payload[i];
@@ -1777,7 +1876,7 @@ run_BCO:
ASSERT(n_ptrs + n_nptrs > 0);
//ASSERT(n_words > 0); // We shouldn't ever need to allocate nullary constructors
for (W_ i = 0; i < n_words; i++) {
- con->payload[i] = (StgClosure*)SpW(i);
+ con->payload[i] = (StgClosure*)ReadSpW(i);
}
Sp_addW(n_words);
Sp_subW(1);
@@ -1799,7 +1898,7 @@ run_BCO:
case bci_TESTLT_P: {
unsigned int discr = BCO_NEXT;
int failto = BCO_GET_LARGE_ARG;
- StgClosure* con = UNTAG_CLOSURE((StgClosure*)SpW(0));
+ StgClosure* con = UNTAG_CLOSURE((StgClosure*)ReadSpW(0));
if (GET_TAG(con) >= discr) {
bciPtr = failto;
}
@@ -1809,7 +1908,7 @@ run_BCO:
case bci_TESTEQ_P: {
unsigned int discr = BCO_NEXT;
int failto = BCO_GET_LARGE_ARG;
- StgClosure* con = UNTAG_CLOSURE((StgClosure*)SpW(0));
+ StgClosure* con = UNTAG_CLOSURE((StgClosure*)ReadSpW(0));
if (GET_TAG(con) != discr) {
bciPtr = failto;
}
@@ -1819,7 +1918,7 @@ run_BCO:
case bci_TESTLT_I: {
int discr = BCO_GET_LARGE_ARG;
int failto = BCO_GET_LARGE_ARG;
- I_ stackInt = (I_)SpW(0);
+ I_ stackInt = (I_)ReadSpW(0);
if (stackInt >= (I_)BCO_LIT(discr))
bciPtr = failto;
goto nextInsn;
@@ -1864,7 +1963,7 @@ run_BCO:
case bci_TESTEQ_I: {
int discr = BCO_GET_LARGE_ARG;
int failto = BCO_GET_LARGE_ARG;
- I_ stackInt = (I_)SpW(0);
+ I_ stackInt = (I_)ReadSpW(0);
if (stackInt != (I_)BCO_LIT(discr)) {
bciPtr = failto;
}
@@ -1914,7 +2013,7 @@ run_BCO:
case bci_TESTLT_W: {
int discr = BCO_GET_LARGE_ARG;
int failto = BCO_GET_LARGE_ARG;
- W_ stackWord = (W_)SpW(0);
+ W_ stackWord = (W_)ReadSpW(0);
if (stackWord >= (W_)BCO_LIT(discr))
bciPtr = failto;
goto nextInsn;
@@ -1959,7 +2058,7 @@ run_BCO:
case bci_TESTEQ_W: {
int discr = BCO_GET_LARGE_ARG;
int failto = BCO_GET_LARGE_ARG;
- W_ stackWord = (W_)SpW(0);
+ W_ stackWord = (W_)ReadSpW(0);
if (stackWord != (W_)BCO_LIT(discr)) {
bciPtr = failto;
}
@@ -2068,7 +2167,7 @@ run_BCO:
goto eval;
case bci_RETURN_P:
- tagged_obj = (StgClosure *)SpW(0);
+ tagged_obj = (StgClosure *)ReadSpW(0);
Sp_addW(1);
goto do_return_pointer;
@@ -2195,7 +2294,7 @@ run_BCO:
}
// this is the function we're going to call
- fn = (void(*)(void))SpW(ret_size);
+ fn = (void(*)(void))ReadSpW(ret_size);
// Restore the Haskell thread's current value of errno
errno = cap->r.rCurrentTSO->saved_errno;
@@ -2246,7 +2345,7 @@ run_BCO:
// Re-load the pointer to the BCO from the stg_ret_p frame,
// it might have moved during the call. Also reload the
// pointers to the components of the BCO.
- obj = (StgClosure*)SpW(1);
+ obj = (StgClosure*)ReadSpW(1);
// N.B. this is a BCO and therefore is by definition not tagged
bco = (StgBCO*)obj;
instrs = (StgWord16*)(bco->instrs->payload);
=====================================
testsuite/tests/llvm/should_run/T25730.hs
=====================================
@@ -0,0 +1,17 @@
+{-# LANGUAGE MagicHash, UnboxedTuples, ExtendedLiterals, UnliftedFFITypes #-}
+
+module Main where
+
+import GHC.Exts
+import GHC.Int
+
+foreign import ccall unsafe
+ packsi32 :: Int32X4# -> Int32X4# -> Int16X8#
+
+main :: IO ()
+main = do
+ let a = broadcastInt32X4# 100#Int32
+ b = broadcastInt32X4# 200#Int32
+ c = packsi32 a b
+ (# x0, x1, x2, x3, x4, x5, x6, x7 #) = unpackInt16X8# c
+ print (I16# x0, I16# x1, I16# x2, I16# x3, I16# x4, I16# x5, I16# x6, I16# x7)
=====================================
testsuite/tests/llvm/should_run/T25730.stdout
=====================================
@@ -0,0 +1 @@
+(100,100,100,100,200,200,200,200)
=====================================
testsuite/tests/llvm/should_run/T25730C.c
=====================================
@@ -0,0 +1,7 @@
+#include <emmintrin.h>
+#include <stdio.h>
+
+__m128i packsi32(__m128i a, __m128i b)
+{
+ return _mm_packs_epi32(a, b);
+}
=====================================
testsuite/tests/llvm/should_run/all.T
=====================================
@@ -14,3 +14,5 @@ def ignore_llvm_and_vortex( msg ):
test('T22487', [normal, normalise_errmsg_fun(ignore_llvm_and_vortex)], compile_and_run, [''])
test('T22033', [normal, normalise_errmsg_fun(ignore_llvm_and_vortex)], compile_and_run, [''])
+test('T25730', [req_c, unless(arch('x86_64'), skip), normalise_errmsg_fun(ignore_llvm_and_vortex)], compile_and_run, ['T25730C.c'])
+ # T25730C.c contains Intel intrinsics, so only run this test on x86
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/829ddd68a80c34486222077828061ce737fe86af...fbd056513388d47160334947caa4f16a78690cb5