[Git][ghc/ghc][wip/marge_bot_batch_merge_job] 13 commits: hadrian: Don't enable TSAN in stage0 build
Marge Bot (@marge-bot)
gitlab at gitlab.haskell.org
Wed Dec 14 11:52:43 UTC 2022
Marge Bot pushed to branch wip/marge_bot_batch_merge_job at Glasgow Haskell Compiler / GHC
Commits:
6d97e74d by Ben Gamari at 2022-12-14T06:52:19-05:00
hadrian: Don't enable TSAN in stage0 build
- - - - -
e61856a9 by Ben Gamari at 2022-12-14T06:52:19-05:00
cmm: Introduce blockConcat
- - - - -
e64d8236 by Ben Gamari at 2022-12-14T06:52:20-05:00
cmm: Introduce MemoryOrderings
- - - - -
63b9b00d by Ben Gamari at 2022-12-14T06:52:20-05:00
llvm: Respect the specified memory orderings
- - - - -
c98d0a57 by Ben Gamari at 2022-12-14T06:52:20-05:00
Codegen/x86: Eliminate barrier for relaxed accesses
- - - - -
ded8cbb8 by Ben Gamari at 2022-12-14T06:52:20-05:00
cmm/Parser: Reduce some repetition
- - - - -
4ec17c2b by Ben Gamari at 2022-12-14T06:52:20-05:00
cmm/Parser: Add syntax for ordered loads and stores
- - - - -
ce29530d by Ben Gamari at 2022-12-14T06:52:20-05:00
cmm/Parser: Atomic load syntax
Originally I had thought I would just use the `prim` call syntax instead
of introducing new syntax for atomic loads. However, it turns out that
`prim` call syntax tends to make things quite unreadable. This new
syntax seems quite natural.
- - - - -
486f16e5 by Ben Gamari at 2022-12-14T06:52:20-05:00
codeGen: Introduce ThreadSanitizer instrumentation
This introduces a new Cmm pass which instruments the program with
ThreadSanitizer annotations, allowing full tracking of mutator memory
accesses via TSAN.
- - - - -
ee6a4265 by Ben Gamari at 2022-12-14T06:52:20-05:00
Hadrian: Drop TSAN_ENABLED define from flavour
This is redundant since TSANUtils.h already defines it.
- - - - -
ad67af9b by Ben Gamari at 2022-12-14T06:52:20-05:00
hadrian: Enable Cmm instrumentation in TSAN flavour
- - - - -
9c9578a5 by Ben Gamari at 2022-12-14T06:52:20-05:00
rts: Ensure that global regs are never passed as fun call args
This is in general unsafe as they may be clobbered if they are mapped to
caller-saved machine registers. See Note [Register parameter passing].
- - - - -
de68c3dd by Ryan Scott at 2022-12-14T06:52:20-05:00
checkValidInst: Don't expand synonyms when splitting sigma types
Previously, the `checkValidInst` function (used when checking that an instance
declaration is headed by an actual type class, not a type synonym) was using
`tcSplitSigmaTy` to split apart the `forall`s and instance context. This is
incorrect, however, as `tcSplitSigmaTy` expands type synonyms, which can cause
instances headed by quantified constraint type synonyms to be accepted
erroneously.
This patch introduces `splitInstTyForValidity`, a variant of `tcSplitSigmaTy`
specialized for validity checking that does _not_ expand type synonyms, and
uses it in `checkValidInst`.
Fixes #22570.
- - - - -
30 changed files:
- compiler/GHC/Cmm/Config.hs
- compiler/GHC/Cmm/ContFlowOpt.hs
- compiler/GHC/Cmm/Dataflow/Block.hs
- compiler/GHC/Cmm/Lexer.x
- compiler/GHC/Cmm/MachOp.hs
- compiler/GHC/Cmm/Parser.y
- compiler/GHC/Cmm/Pipeline.hs
- + compiler/GHC/Cmm/ThreadSanitizer.hs
- compiler/GHC/CmmToAsm/AArch64/CodeGen.hs
- compiler/GHC/CmmToAsm/PPC/CodeGen.hs
- compiler/GHC/CmmToAsm/Wasm/FromCmm.hs
- compiler/GHC/CmmToAsm/X86/CodeGen.hs
- compiler/GHC/CmmToC.hs
- compiler/GHC/CmmToLlvm/CodeGen.hs
- compiler/GHC/Driver/Config/Cmm.hs
- compiler/GHC/Driver/Flags.hs
- compiler/GHC/Driver/Session.hs
- compiler/GHC/StgToCmm/ExtCode.hs
- compiler/GHC/StgToCmm/Prim.hs
- compiler/GHC/Tc/Validity.hs
- compiler/ghc.cabal.in
- docs/users_guide/debugging.rst
- hadrian/src/Flavour.hs
- rts/HeapStackCheck.cmm
- + rts/TSANUtils.c
- rts/include/rts/TSANUtils.h
- rts/rts.cabal.in
- + testsuite/tests/typecheck/should_fail/T22570.hs
- + testsuite/tests/typecheck/should_fail/T22570.stderr
- testsuite/tests/typecheck/should_fail/all.T
Changes:
=====================================
compiler/GHC/Cmm/Config.hs
=====================================
@@ -19,6 +19,7 @@ data CmmConfig = CmmConfig
, cmmDoLinting :: !Bool -- ^ Do Cmm Linting Optimization or not
, cmmOptElimCommonBlks :: !Bool -- ^ Eliminate common blocks or not
, cmmOptSink :: !Bool -- ^ Perform sink after stack layout or not
+ , cmmOptThreadSanitizer :: !Bool -- ^ Instrument memory accesses for ThreadSanitizer
, cmmGenStackUnwindInstr :: !Bool -- ^ Generate stack unwinding instructions (for debugging)
, cmmExternalDynamicRefs :: !Bool -- ^ Generate code to link against dynamic libraries
, cmmDoCmmSwitchPlans :: !Bool -- ^ Should the Cmm pass replace Stg switch statements
=====================================
compiler/GHC/Cmm/ContFlowOpt.hs
=====================================
@@ -10,7 +10,7 @@ where
import GHC.Prelude hiding (succ, unzip, zip)
-import GHC.Cmm.Dataflow.Block
+import GHC.Cmm.Dataflow.Block hiding (blockConcat)
import GHC.Cmm.Dataflow.Collections
import GHC.Cmm.Dataflow.Graph
import GHC.Cmm.Dataflow.Label
=====================================
compiler/GHC/Cmm/Dataflow/Block.hs
=====================================
@@ -14,6 +14,7 @@ module GHC.Cmm.Dataflow.Block
, IndexedCO
, Block(..)
, blockAppend
+ , blockConcat
, blockCons
, blockFromList
, blockJoin
@@ -136,6 +137,8 @@ blockJoin f b t = BlockCC f b t
blockAppend :: Block n e O -> Block n O x -> Block n e x
blockAppend = cat
+blockConcat :: [Block n O O] -> Block n O O
+blockConcat = foldr blockAppend emptyBlock
-- Taking apart
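Aside: blockConcat is simply a right fold of blockAppend, so the empty list
yields emptyBlock and appending associates as expected. A minimal sketch
against the ghc library (concatOO is a hypothetical name for illustration):

    import GHC.Cmm.Dataflow.Block

    -- blockConcat []      == emptyBlock
    -- blockConcat [a,b,c] == a `blockAppend` (b `blockAppend` c)
    concatOO :: [Block n O O] -> Block n O O
    concatOO = foldr blockAppend emptyBlock   -- definitionally blockConcat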
=====================================
compiler/GHC/Cmm/Lexer.x
=====================================
@@ -94,6 +94,10 @@ $white_no_nl+ ;
"!=" { kw CmmT_Ne }
"&&" { kw CmmT_BoolAnd }
"||" { kw CmmT_BoolOr }
+ "%relaxed" { kw CmmT_Relaxed }
+ "%acquire" { kw CmmT_Acquire }
+ "%release" { kw CmmT_Release }
+ "%seq_cst" { kw CmmT_SeqCst }
"True" { kw CmmT_True }
"False" { kw CmmT_False }
@@ -183,6 +187,10 @@ data CmmToken
| CmmT_False
| CmmT_True
| CmmT_likely
+ | CmmT_Relaxed
+ | CmmT_Acquire
+ | CmmT_Release
+ | CmmT_SeqCst
deriving (Show)
-- -----------------------------------------------------------------------------
=====================================
compiler/GHC/Cmm/MachOp.hs
=====================================
@@ -24,6 +24,7 @@ module GHC.Cmm.MachOp
, machOpMemcpyishAlign
-- Atomic read-modify-write
+ , MemoryOrdering(..)
, AtomicMachOp(..)
)
where
@@ -662,10 +663,12 @@ data CallishMachOp
| MO_BSwap Width
| MO_BRev Width
- -- Atomic read-modify-write.
+ -- | Atomic read-modify-write. Arguments are @[dest, n]@.
| MO_AtomicRMW Width AtomicMachOp
- | MO_AtomicRead Width
- | MO_AtomicWrite Width
+ -- | Atomic read. Arguments are @[addr]@.
+ | MO_AtomicRead Width MemoryOrdering
+ -- | Atomic write. Arguments are @[addr, value]@.
+ | MO_AtomicWrite Width MemoryOrdering
-- | Atomic compare-and-swap. Arguments are @[dest, expected, new]@.
-- Sequentially consistent.
-- Possible future refactoring: should this be an 'MO_AtomicRMW' variant?
@@ -680,6 +683,14 @@ data CallishMachOp
| MO_ResumeThread
deriving (Eq, Show)
+-- | C11 memory ordering semantics.
+data MemoryOrdering
+ = MemOrderRelaxed -- ^ relaxed ordering
+ | MemOrderAcquire -- ^ acquire ordering
+ | MemOrderRelease -- ^ release ordering
+ | MemOrderSeqCst -- ^ sequentially consistent
+ deriving (Eq, Ord, Show)
+
-- | The operation to perform atomically.
data AtomicMachOp =
AMO_Add
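Aside: the four orderings correspond one-to-one to C11's memory_order_* values;
consume and acq_rel have no Cmm counterpart. A hypothetical helper (not part of
the patch) spelling out the correspondence:

    import GHC.Cmm.MachOp (MemoryOrdering (..))

    c11OrderName :: MemoryOrdering -> String
    c11OrderName MemOrderRelaxed = "memory_order_relaxed"
    c11OrderName MemOrderAcquire = "memory_order_acquire"
    c11OrderName MemOrderRelease = "memory_order_release"
    c11OrderName MemOrderSeqCst  = "memory_order_seq_cst"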
=====================================
compiler/GHC/Cmm/Parser.y
=====================================
@@ -194,6 +194,27 @@ convention. Note if a field is longer than a word (e.g. a D_ on
a 32-bit machine) then the call will push as many words as
necessary to the stack to accommodate it (e.g. 2).
+Memory ordering
+---------------
+
+Cmm respects the C11 memory model and distinguishes between non-atomic and
+atomic memory accesses. In C11 fashion, atomic accesses can provide a number of
+memory ordering guarantees. These are supported in Cmm syntax as follows:
+
+ W_[ptr] = ...; // a non-atomic store
+ %relaxed W_[ptr] = ...; // an atomic store with relaxed ordering semantics
+ %release W_[ptr] = ...; // an atomic store with release ordering semantics
+
+ x = W_[ptr]; // a non-atomic load
+ x = %relaxed W_[ptr]; // an atomic load with relaxed ordering
+ x = %acquire W_[ptr]; // an atomic load with acquire ordering
+ // or equivalently...
+ x = prim %load_acquire64(ptr);
+
+Here we use W_ as an example, but these operations can be used on all Cmm
+types.
+
+See Note [Heap memory barriers] in SMP.h for details.
----------------------------------------------------------------------------- -}
@@ -313,6 +334,10 @@ import qualified Data.ByteString.Char8 as BS8
'True' { L _ (CmmT_True ) }
'False' { L _ (CmmT_False) }
'likely'{ L _ (CmmT_likely)}
+ 'relaxed'{ L _ (CmmT_Relaxed)}
+ 'acquire'{ L _ (CmmT_Acquire)}
+ 'release'{ L _ (CmmT_Release)}
+ 'seq_cst'{ L _ (CmmT_SeqCst)}
'CLOSURE' { L _ (CmmT_CLOSURE) }
'INFO_TABLE' { L _ (CmmT_INFO_TABLE) }
@@ -627,8 +652,23 @@ stmt :: { CmmParse () }
| lreg '=' expr ';'
{ do reg <- $1; e <- $3; withSourceNote $2 $4 (emitAssign reg e) }
+
+ -- Use lreg instead of local_reg to avoid ambiguity
+ | lreg '=' mem_ordering type '[' expr ']' ';'
+ { do reg <- $1;
+ let lreg = case reg of
+ { CmmLocal r -> r
; other -> pprPanic "CmmParse:" (ppr reg <> text " not a local register")
+ } ;
+ mord <- $3;
+ let { ty = $4; w = typeWidth ty };
+ e <- $6;
+ let op = MO_AtomicRead w mord;
+ withSourceNote $2 $7 $ code (emitPrimCall [lreg] op [e]) }
+ | mem_ordering type '[' expr ']' '=' expr ';'
+ { do mord <- $1; withSourceNote $3 $8 (doStore (Just mord) $2 $4 $7) }
| type '[' expr ']' '=' expr ';'
- { withSourceNote $2 $7 (doStore $1 $3 $6) }
+ { withSourceNote $2 $7 (doStore Nothing $1 $3 $6) }
-- Gah! We really want to say "foreign_results" but that causes
-- a shift/reduce conflict with assignment. We either
@@ -678,6 +718,14 @@ unwind_regs
| GLOBALREG '=' expr_or_unknown
{ do e <- $3; return [($1, e)] }
+-- | A memory ordering
+mem_ordering :: { CmmParse MemoryOrdering }
+mem_ordering
+ : 'relaxed' { do return MemOrderRelaxed }
+ | 'release' { do return MemOrderRelease }
+ | 'acquire' { do return MemOrderAcquire }
+ | 'seq_cst' { do return MemOrderSeqCst }
+
-- | Used by unwind to indicate unknown unwinding values.
expr_or_unknown
:: { CmmParse (Maybe CmmExpr) }
@@ -953,6 +1001,7 @@ exprMacros profile align_check = listToUFM [
platform = profilePlatform profile
-- we understand a subset of C-- primitives:
+machOps :: UniqFM FastString (Width -> MachOp)
machOps = listToUFM $
map (\(x, y) -> (mkFastString x, y)) [
( "add", MO_Add ),
@@ -1073,37 +1122,32 @@ callishMachOps platform = listToUFM $
( "suspendThread", (MO_SuspendThread,) ),
( "resumeThread", (MO_ResumeThread,) ),
- ("prefetch0", (MO_Prefetch_Data 0,)),
- ("prefetch1", (MO_Prefetch_Data 1,)),
- ("prefetch2", (MO_Prefetch_Data 2,)),
- ("prefetch3", (MO_Prefetch_Data 3,)),
-
- ( "popcnt8", (MO_PopCnt W8,)),
- ( "popcnt16", (MO_PopCnt W16,)),
- ( "popcnt32", (MO_PopCnt W32,)),
- ( "popcnt64", (MO_PopCnt W64,)),
-
- ( "pdep8", (MO_Pdep W8,)),
- ( "pdep16", (MO_Pdep W16,)),
- ( "pdep32", (MO_Pdep W32,)),
- ( "pdep64", (MO_Pdep W64,)),
-
- ( "pext8", (MO_Pext W8,)),
- ( "pext16", (MO_Pext W16,)),
- ( "pext32", (MO_Pext W32,)),
- ( "pext64", (MO_Pext W64,)),
-
- ( "cmpxchg8", (MO_Cmpxchg W8,)),
- ( "cmpxchg16", (MO_Cmpxchg W16,)),
- ( "cmpxchg32", (MO_Cmpxchg W32,)),
- ( "cmpxchg64", (MO_Cmpxchg W64,)),
-
- ( "xchg8", (MO_Xchg W8,)),
- ( "xchg16", (MO_Xchg W16,)),
- ( "xchg32", (MO_Xchg W32,)),
- ( "xchg64", (MO_Xchg W64,))
+ ( "prefetch0", (MO_Prefetch_Data 0,)),
+ ( "prefetch1", (MO_Prefetch_Data 1,)),
+ ( "prefetch2", (MO_Prefetch_Data 2,)),
+ ( "prefetch3", (MO_Prefetch_Data 3,))
+ ] ++ concat
+ [ allWidths "popcnt" MO_PopCnt
+ , allWidths "pdep" MO_Pdep
+ , allWidths "pext" MO_Pext
+ , allWidths "cmpxchg" MO_Cmpxchg
+ , allWidths "xchg" MO_Xchg
+ , allWidths "load_relaxed" (\w -> MO_AtomicRead w MemOrderAcquire)
+ , allWidths "load_acquire" (\w -> MO_AtomicRead w MemOrderAcquire)
+ , allWidths "load_seqcst" (\w -> MO_AtomicRead w MemOrderSeqCst)
+ , allWidths "store_release" (\w -> MO_AtomicWrite w MemOrderRelease)
+ , allWidths "store_seqcst" (\w -> MO_AtomicWrite w MemOrderSeqCst)
]
where
+ allWidths
+ :: String
+ -> (Width -> CallishMachOp)
+ -> [(FastString, a -> (CallishMachOp, a))]
+ allWidths name f =
+ [ (mkFastString $ name ++ show (widthInBits w), (f w,))
+ | w <- [W8, W16, W32, W64]
+ ]
+
memcpyLikeTweakArgs :: (Int -> CallishMachOp) -> [CmmExpr] -> (CallishMachOp, [CmmExpr])
memcpyLikeTweakArgs op [] = pgmError "memcpy-like function requires at least one argument"
memcpyLikeTweakArgs op args@(_:_) =
@@ -1347,8 +1391,12 @@ primCall results_code name args_code
let (p, args') = f args
code (emitPrimCall (map fst results) p args')
-doStore :: CmmType -> CmmParse CmmExpr -> CmmParse CmmExpr -> CmmParse ()
-doStore rep addr_code val_code
+doStore :: Maybe MemoryOrdering
+ -> CmmType
+ -> CmmParse CmmExpr -- ^ address
+ -> CmmParse CmmExpr -- ^ value
+ -> CmmParse ()
+doStore mem_ord rep addr_code val_code
= do platform <- getPlatform
addr <- addr_code
val <- val_code
@@ -1362,7 +1410,7 @@ doStore rep addr_code val_code
let coerce_val
| val_width /= rep_width = CmmMachOp (MO_UU_Conv val_width rep_width) [val]
| otherwise = val
- emitStore addr coerce_val
+ emitStore mem_ord addr coerce_val
-- -----------------------------------------------------------------------------
-- If-then-else and boolean expressions
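Aside: the allWidths helper above replaces the old hand-written width tables in
callishMachOps. A standalone sketch of the expansion (allWidths is local to
callishMachOps in the patch; the imports are assumed to be as in the ghc
library, and widthInBits yields 8/16/32/64 here):

    {-# LANGUAGE TupleSections #-}
    import GHC.Cmm.MachOp (CallishMachOp (..))
    import GHC.Cmm.Type (Width (..), widthInBits)
    import GHC.Data.FastString (FastString, mkFastString)

    allWidths :: String -> (Width -> CallishMachOp)
              -> [(FastString, a -> (CallishMachOp, a))]
    allWidths name f =
      [ (mkFastString (name ++ show (widthInBits w)), (f w,))
      | w <- [W8, W16, W32, W64] ]

    -- maps "xchg8" .. "xchg64" to MO_Xchg at the corresponding width
    xchgOps :: [(FastString, a -> (CallishMachOp, a))]
    xchgOps = allWidths "xchg" MO_Xchg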
=====================================
compiler/GHC/Cmm/Pipeline.hs
=====================================
@@ -19,6 +19,7 @@ import GHC.Cmm.LayoutStack
import GHC.Cmm.ProcPoint
import GHC.Cmm.Sink
import GHC.Cmm.Switch.Implement
+import GHC.Cmm.ThreadSanitizer
import GHC.Types.Unique.Supply
@@ -98,6 +99,13 @@ cpsTop logger platform cfg proc =
else pure g
dump Opt_D_dump_cmm_switch "Post switch plan" g
+ ----------- ThreadSanitizer instrumentation -----------------------------
+ g <- {-# SCC "annotateTSAN" #-}
+ if cmmOptThreadSanitizer cfg
+ then runUniqSM $ annotateTSAN platform g
+ else return g
+ dump Opt_D_dump_cmm_thread_sanitizer "ThreadSanitizer instrumentation" g
+
----------- Proc points -------------------------------------------------
let
call_pps :: ProcPointSet -- LabelMap
=====================================
compiler/GHC/Cmm/ThreadSanitizer.hs
=====================================
@@ -0,0 +1,285 @@
+{-# LANGUAGE GADTs #-}
+{-# LANGUAGE RankNTypes #-}
+
+-- | Annotate a CmmGraph with ThreadSanitizer instrumentation calls.
+module GHC.Cmm.ThreadSanitizer (annotateTSAN) where
+
+import GHC.Prelude
+
+import GHC.StgToCmm.Utils (get_GlobalReg_addr)
+import GHC.Platform
+import GHC.Platform.Regs (activeStgRegs, callerSaves)
+import GHC.Cmm
+import GHC.Cmm.Utils
+import GHC.Cmm.CLabel
+import GHC.Cmm.Dataflow
+import GHC.Cmm.Dataflow.Block
+import GHC.Cmm.Dataflow.Graph
+import GHC.Data.FastString
+import GHC.Types.Basic
+import GHC.Types.ForeignCall
+import GHC.Types.Unique
+import GHC.Types.Unique.Supply
+
+import Data.Maybe (fromMaybe)
+
+data Env = Env { platform :: Platform
+ , uniques :: [Unique]
+ }
+
+annotateTSAN :: Platform -> CmmGraph -> UniqSM CmmGraph
+annotateTSAN platform graph = do
+ env <- Env platform <$> getUniquesM
+ return $ modifyGraph (mapGraphBlocks (annotateBlock env)) graph
+
+mapBlockList :: (forall e' x'. n e' x' -> Block n e' x')
+ -> Block n e x -> Block n e x
+mapBlockList f (BlockCO n rest ) = f n `blockAppend` mapBlockList f rest
+mapBlockList f (BlockCC n rest m) = f n `blockAppend` mapBlockList f rest `blockAppend` f m
+mapBlockList f (BlockOC rest m) = mapBlockList f rest `blockAppend` f m
+mapBlockList _ BNil = BNil
+mapBlockList f (BMiddle blk) = f blk
+mapBlockList f (BCat a b) = mapBlockList f a `blockAppend` mapBlockList f b
+mapBlockList f (BSnoc a n) = mapBlockList f a `blockAppend` f n
+mapBlockList f (BCons n a) = f n `blockAppend` mapBlockList f a
+
+annotateBlock :: Env -> Block CmmNode e x -> Block CmmNode e x
+annotateBlock env = mapBlockList (annotateNode env)
+
+annotateNode :: Env -> CmmNode e x -> Block CmmNode e x
+annotateNode env node =
+ case node of
+ CmmEntry{} -> BlockCO node BNil
+ CmmComment{} -> BMiddle node
+ CmmTick{} -> BMiddle node
+ CmmUnwind{} -> BMiddle node
+ CmmAssign{} -> annotateNodeOO env node
+ CmmStore lhs rhs align ->
+ let ty = cmmExprType (platform env) rhs
+ rhs_nodes = annotateLoads env (collectExprLoads rhs)
+ lhs_nodes = annotateLoads env (collectExprLoads lhs)
+ st = tsanStore env align ty lhs
+ in rhs_nodes `blockAppend` lhs_nodes `blockAppend` st `blockSnoc` node
+ CmmUnsafeForeignCall (PrimTarget op) formals args ->
+ let node' = fromMaybe (BMiddle node) (annotatePrim env op formals args)
+ arg_nodes = blockConcat $ map (annotateExpr env) args
+ in arg_nodes `blockAppend` node'
+ CmmUnsafeForeignCall{} -> annotateNodeOO env node
+ CmmBranch{} -> annotateNodeOC env node
+ CmmCondBranch{} -> annotateNodeOC env node
+ CmmSwitch{} -> annotateNodeOC env node
+ CmmCall{} -> annotateNodeOC env node
+ CmmForeignCall{} -> annotateNodeOC env node
+
+annotateNodeOO :: Env -> CmmNode O O -> Block CmmNode O O
+annotateNodeOO env node =
+ annotateLoads env (collectLoadsNode node) `blockSnoc` node
+
+annotateNodeOC :: Env -> CmmNode O C -> Block CmmNode O C
+annotateNodeOC env node =
+ annotateLoads env (collectLoadsNode node) `blockJoinTail` node
+
+annotateExpr :: Env -> CmmExpr -> Block CmmNode O O
+annotateExpr env expr =
+ annotateLoads env (collectExprLoads expr)
+
+data Load = Load CmmType AlignmentSpec CmmExpr
+
+annotateLoads :: Env -> [Load] -> Block CmmNode O O
+annotateLoads env loads =
+ blockConcat
+ [ tsanLoad env align ty addr
+ | Load ty align addr <- loads
+ ]
+
+collectLoadsNode :: CmmNode e x -> [Load]
+collectLoadsNode node =
+ foldExp (\exp rest -> collectExprLoads exp ++ rest) node []
+
+-- | Collect all of the memory locations loaded from by a 'CmmExpr'.
+collectExprLoads :: CmmExpr -> [Load]
+collectExprLoads (CmmLit _) = []
+collectExprLoads (CmmLoad e ty align) = [Load ty align e]
+collectExprLoads (CmmReg _) = []
+collectExprLoads (CmmMachOp _op args) = foldMap collectExprLoads args
+collectExprLoads (CmmStackSlot _ _) = []
+collectExprLoads (CmmRegOff _ _) = []
+
+-- | Generate TSAN instrumentation for a 'CallishMachOp' occurrence.
+annotatePrim :: Env
+ -> CallishMachOp -- ^ the applied operation
+ -> [CmmFormal] -- ^ results
+ -> [CmmActual] -- ^ arguments
+ -> Maybe (Block CmmNode O O)
+ -- ^ 'Just' a block of instrumentation, if applicable
+annotatePrim env (MO_AtomicRMW w aop) [dest] [addr, val] = Just $ tsanAtomicRMW env MemOrderSeqCst aop w addr val dest
+annotatePrim env (MO_AtomicRead w mord) [dest] [addr] = Just $ tsanAtomicLoad env mord w addr dest
+annotatePrim env (MO_AtomicWrite w mord) [] [addr, val] = Just $ tsanAtomicStore env mord w val addr
+annotatePrim env (MO_Xchg w) [dest] [addr, val] = Just $ tsanAtomicExchange env MemOrderSeqCst w val addr dest
+annotatePrim env (MO_Cmpxchg w) [dest] [addr, expected, new]
+ = Just $ tsanAtomicCas env MemOrderSeqCst MemOrderSeqCst w addr expected new dest
+annotatePrim _ _ _ _ = Nothing
+
+mkUnsafeCall :: Env
+ -> ForeignTarget -- ^ function
+ -> [CmmFormal] -- ^ results
+ -> [CmmActual] -- ^ arguments
+ -> Block CmmNode O O
+mkUnsafeCall env ftgt formals args =
+ save `blockAppend` -- save global registers
+ bind_args `blockSnoc` -- bind arguments to local registers
+ call `blockAppend` -- perform call
+ restore -- restore global registers
+ where
+ -- We are rather conservative here and just save/restore all GlobalRegs.
+ (save, restore) = saveRestoreCallerRegs (platform env)
+
+ -- We also must be careful not to mention caller-saved registers in
+ -- arguments as Cmm-Lint checks this. To accomplish this we instead bind
+ -- the arguments to local registers.
+ arg_regs :: [CmmReg]
+ arg_regs = zipWith arg_reg (uniques env) args
+ where
+ arg_reg :: Unique -> CmmExpr -> CmmReg
+ arg_reg u expr = CmmLocal $ LocalReg u (cmmExprType (platform env) expr)
+
+ bind_args :: Block CmmNode O O
+ bind_args = blockConcat $ zipWith (\r e -> BMiddle $ CmmAssign r e) arg_regs args
+
+ call = CmmUnsafeForeignCall ftgt formals (map CmmReg arg_regs)
+
+saveRestoreCallerRegs :: Platform
+ -> (Block CmmNode O O, Block CmmNode O O)
+saveRestoreCallerRegs platform =
+ (save, restore)
+ where
+ regs = filter (callerSaves platform) (activeStgRegs platform)
+
+ save = blockFromList (map saveReg regs)
+ saveReg reg =
+ CmmStore (get_GlobalReg_addr platform reg)
+ (CmmReg (CmmGlobal reg))
+ NaturallyAligned
+
+ restore = blockFromList (map restoreReg regs)
+ restoreReg reg =
+ CmmAssign (CmmGlobal reg)
+ (CmmLoad (get_GlobalReg_addr platform reg)
+ (globalRegType platform reg)
+ NaturallyAligned)
+
+-- | Mirrors __tsan_memory_order
+-- <https://github.com/llvm-mirror/compiler-rt/blob/master/include/sanitizer/tsan_interface_atomic.h#L32>
+memoryOrderToTsanMemoryOrder :: Env -> MemoryOrdering -> CmmExpr
+memoryOrderToTsanMemoryOrder env mord =
+ mkIntExpr (platform env) n
+ where
+ n = case mord of
+ MemOrderRelaxed -> 0
+ MemOrderAcquire -> 2
+ MemOrderRelease -> 3
+ MemOrderSeqCst -> 5
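+          -- 1 and 4 are __tsan_memory_order_consume and _acq_rel,
+          -- which Cmm never emits.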
+
+tsanTarget :: FastString -- ^ function name
+ -> [ForeignHint] -- ^ formals
+ -> [ForeignHint] -- ^ arguments
+ -> ForeignTarget
+tsanTarget fn formals args =
+ ForeignTarget (CmmLit (CmmLabel lbl)) conv
+ where
+ conv = ForeignConvention CCallConv args formals CmmMayReturn
+ lbl = mkForeignLabel fn Nothing ForeignLabelInExternalPackage IsFunction
+
+tsanStore :: Env
+ -> AlignmentSpec -> CmmType -> CmmExpr
+ -> Block CmmNode O O
+tsanStore env align ty addr =
+ mkUnsafeCall env ftarget [] [addr]
+ where
+ ftarget = tsanTarget fn [] [AddrHint]
+ w = widthInBytes (typeWidth ty)
+ fn = case align of
+ Unaligned
+ | w > 1 -> fsLit $ "__tsan_unaligned_write" ++ show w
+ _ -> fsLit $ "__tsan_write" ++ show w
+
+tsanLoad :: Env
+ -> AlignmentSpec -> CmmType -> CmmExpr
+ -> Block CmmNode O O
+tsanLoad env align ty addr =
+ mkUnsafeCall env ftarget [] [addr]
+ where
+ ftarget = tsanTarget fn [] [AddrHint]
+ w = widthInBytes (typeWidth ty)
+ fn = case align of
+ Unaligned
+ | w > 1 -> fsLit $ "__tsan_unaligned_read" ++ show w
+ _ -> fsLit $ "__tsan_read" ++ show w
+
+tsanAtomicStore :: Env
+ -> MemoryOrdering -> Width -> CmmExpr -> CmmExpr
+ -> Block CmmNode O O
+tsanAtomicStore env mord w val addr =
+ mkUnsafeCall env ftarget [] [addr, val, mord']
+ where
+ mord' = memoryOrderToTsanMemoryOrder env mord
+ ftarget = tsanTarget fn [] [AddrHint, NoHint, NoHint]
+ fn = fsLit $ "__tsan_atomic" ++ show (widthInBits w) ++ "_store"
+
+tsanAtomicLoad :: Env
+ -> MemoryOrdering -> Width -> CmmExpr -> LocalReg
+ -> Block CmmNode O O
+tsanAtomicLoad env mord w addr dest =
+ mkUnsafeCall env ftarget [dest] [addr, mord']
+ where
+ mord' = memoryOrderToTsanMemoryOrder env mord
+ ftarget = tsanTarget fn [NoHint] [AddrHint, NoHint]
+ fn = fsLit $ "__tsan_atomic" ++ show (widthInBits w) ++ "_load"
+
+tsanAtomicExchange :: Env
+ -> MemoryOrdering -> Width -> CmmExpr -> CmmExpr -> LocalReg
+ -> Block CmmNode O O
+tsanAtomicExchange env mord w val addr dest =
+ mkUnsafeCall env ftarget [dest] [addr, val, mord']
+ where
+ mord' = memoryOrderToTsanMemoryOrder env mord
+ ftarget = tsanTarget fn [NoHint] [AddrHint, NoHint, NoHint]
+ fn = fsLit $ "__tsan_atomic" ++ show (widthInBits w) ++ "_exchange"
+
+-- N.B. C11 CAS returns a boolean (to avoid the ABA problem) whereas Cmm's CAS
+-- returns the expected value. We define a shim in the RTS to provide
+-- Cmm's semantics using the TSAN C11 primitive.
+tsanAtomicCas :: Env
+ -> MemoryOrdering -- ^ success ordering
+ -> MemoryOrdering -- ^ failure ordering
+ -> Width
+ -> CmmExpr -- ^ address
+ -> CmmExpr -- ^ expected value
+ -> CmmExpr -- ^ new value
+ -> LocalReg -- ^ result destination
+ -> Block CmmNode O O
+tsanAtomicCas env mord_success mord_failure w addr expected new dest =
+ mkUnsafeCall env ftarget [dest] [addr, expected, new, mord_success', mord_failure']
+ where
+ mord_success' = memoryOrderToTsanMemoryOrder env mord_success
+ mord_failure' = memoryOrderToTsanMemoryOrder env mord_failure
+ ftarget = tsanTarget fn [NoHint] [AddrHint, NoHint, NoHint, NoHint, NoHint]
+ fn = fsLit $ "ghc_tsan_atomic" ++ show (widthInBits w) ++ "_compare_exchange"
+
+tsanAtomicRMW :: Env
+ -> MemoryOrdering -> AtomicMachOp -> Width -> CmmExpr -> CmmExpr -> LocalReg
+ -> Block CmmNode O O
+tsanAtomicRMW env mord op w addr val dest =
+ mkUnsafeCall env ftarget [dest] [addr, val, mord']
+ where
+ mord' = memoryOrderToTsanMemoryOrder env mord
+ ftarget = tsanTarget fn [NoHint] [AddrHint, NoHint, NoHint]
+ op' = case op of
+ AMO_Add -> "fetch_add"
+ AMO_Sub -> "fetch_sub"
+ AMO_And -> "fetch_and"
+ AMO_Nand -> "fetch_nand"
+ AMO_Or -> "fetch_or"
+ AMO_Xor -> "fetch_xor"
+ fn = fsLit $ "__tsan_atomic" ++ show (widthInBits w) ++ "_" ++ op'
+
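Aside: the heart of the pass is the CmmStore case of annotateNode above. A
condensed sketch of what it emits for a single non-atomic store (this pretends
the module's internal helpers were exported; only annotateTSAN actually is):

    instrumentStore :: Env -> CmmNode O O -> Block CmmNode O O
    instrumentStore env node@(CmmStore lhs rhs align) =
      let ty        = cmmExprType (platform env) rhs
          rhs_reads = annotateLoads env (collectExprLoads rhs) -- __tsan_read* for loads in the value
          lhs_reads = annotateLoads env (collectExprLoads lhs) -- ... and in the address
          write_ann = tsanStore env align ty lhs               -- __tsan_write<N> on the target
      in rhs_reads `blockAppend` lhs_reads `blockAppend` write_ann `blockSnoc` node
    instrumentStore _ node = BMiddle node                      -- other middle nodes unchanged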
=====================================
compiler/GHC/CmmToAsm/AArch64/CodeGen.hs
=====================================
@@ -1533,8 +1533,8 @@ genCCall target dest_regs arg_regs bid = do
-- -- Atomic read-modify-write.
MO_AtomicRMW w amop -> mkCCall (atomicRMWLabel w amop)
- MO_AtomicRead w -> mkCCall (atomicReadLabel w)
- MO_AtomicWrite w -> mkCCall (atomicWriteLabel w)
+ MO_AtomicRead w _ -> mkCCall (atomicReadLabel w)
+ MO_AtomicWrite w _ -> mkCCall (atomicWriteLabel w)
MO_Cmpxchg w -> mkCCall (cmpxchgLabel w)
-- -- Should be an AtomicRMW variant eventually.
-- -- Sequential consistent.
=====================================
compiler/GHC/CmmToAsm/PPC/CodeGen.hs
=====================================
@@ -1173,7 +1173,7 @@ genCCall (PrimTarget (MO_AtomicRMW width amop)) [dst] [addr, n]
(n_reg, n_code) <- getSomeReg n
return (op dst dst (RIReg n_reg), n_code)
-genCCall (PrimTarget (MO_AtomicRead width)) [dst] [addr]
+genCCall (PrimTarget (MO_AtomicRead width _)) [dst] [addr]
= do let fmt = intFormat width
reg_dst = getLocalRegReg dst
form = if widthInBits width == 64 then DS else D
@@ -1200,7 +1200,7 @@ genCCall (PrimTarget (MO_AtomicRead width)) [dst] [addr]
-- This is also what gcc does.
-genCCall (PrimTarget (MO_AtomicWrite width)) [] [addr, val] = do
+genCCall (PrimTarget (MO_AtomicWrite width _)) [] [addr, val] = do
code <- assignMem_IntCode (intFormat width) addr val
return $ unitOL HWSYNC `appOL` code
@@ -2067,8 +2067,8 @@ genCCall' config gcp target dest_regs args
MO_AtomicRMW {} -> unsupported
MO_Cmpxchg w -> (cmpxchgLabel w, False)
MO_Xchg w -> (xchgLabel w, False)
- MO_AtomicRead _ -> unsupported
- MO_AtomicWrite _ -> unsupported
+ MO_AtomicRead _ _ -> unsupported
+ MO_AtomicWrite _ _ -> unsupported
MO_S_Mul2 {} -> unsupported
MO_S_QuotRem {} -> unsupported
=====================================
compiler/GHC/CmmToAsm/Wasm/FromCmm.hs
=====================================
@@ -1304,7 +1304,7 @@ lower_CallishMachOp lbl (MO_AtomicRMW w0 op) rs xs =
CmmMayReturn
rs
xs
-lower_CallishMachOp lbl (MO_AtomicRead w0) [reg] [ptr] = do
+lower_CallishMachOp lbl (MO_AtomicRead w0 _) [reg] [ptr] = do
SomeWasmExpr ty (WasmExpr ret_instr) <-
lower_CmmLoad
lbl
@@ -1313,7 +1313,7 @@ lower_CallishMachOp lbl (MO_AtomicRead w0) [reg] [ptr] = do
NaturallyAligned
ri <- onCmmLocalReg_Typed ty reg
pure $ WasmStatements $ ret_instr `WasmConcat` WasmLocalSet ty ri
-lower_CallishMachOp lbl (MO_AtomicWrite _) [] [ptr, val] =
+lower_CallishMachOp lbl (MO_AtomicWrite _ _) [] [ptr, val] =
lower_CmmStore lbl ptr val NaturallyAligned
lower_CallishMachOp lbl (MO_Cmpxchg w0) rs xs = lower_MO_Cmpxchg lbl w0 rs xs
lower_CallishMachOp lbl (MO_Xchg w0) rs xs =
=====================================
compiler/GHC/CmmToAsm/X86/CodeGen.hs
=====================================
@@ -2203,8 +2203,8 @@ genSimplePrim bid (MO_Pdep width) [dst] [src,mask] = genPdep bid widt
genSimplePrim bid (MO_Pext width) [dst] [src,mask] = genPext bid width dst src mask
genSimplePrim bid (MO_Clz width) [dst] [src] = genClz bid width dst src
genSimplePrim bid (MO_UF_Conv width) [dst] [src] = genWordToFloat bid width dst src
-genSimplePrim _ (MO_AtomicRead w) [dst] [addr] = genAtomicRead w dst addr
-genSimplePrim _ (MO_AtomicWrite w) [] [addr,val] = genAtomicWrite w addr val
+genSimplePrim _ (MO_AtomicRead w mo) [dst] [addr] = genAtomicRead w mo dst addr
+genSimplePrim _ (MO_AtomicWrite w mo) [] [addr,val] = genAtomicWrite w mo addr val
genSimplePrim bid (MO_Cmpxchg width) [dst] [addr,old,new] = genCmpXchg bid width dst addr old new
genSimplePrim _ (MO_Xchg width) [dst] [addr, value] = genXchg width dst addr value
genSimplePrim _ (MO_AddWordC w) [r,c] [x,y] = genAddSubRetCarry w ADD_CC (const Nothing) CARRY r c x y
@@ -3962,15 +3962,20 @@ genWordToFloat bid width dst src =
-- TODO: generate assembly instead
genPrimCCall bid (word2FloatLabel width) [dst] [src]
-genAtomicRead :: Width -> LocalReg -> CmmExpr -> NatM InstrBlock
-genAtomicRead width dst addr = do
+genAtomicRead :: Width -> MemoryOrdering -> LocalReg -> CmmExpr -> NatM InstrBlock
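+-- The ordering can be ignored here: on x86-TSO every load already has acquire
+-- semantics, and a seq_cst load needs no fence provided seq_cst stores are
+-- fenced (see genAtomicWrite below).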
+genAtomicRead width _mord dst addr = do
load_code <- intLoadCode (MOV (intFormat width)) addr
return (load_code (getLocalRegReg dst))
-genAtomicWrite :: Width -> CmmExpr -> CmmExpr -> NatM InstrBlock
-genAtomicWrite width addr val = do
+genAtomicWrite :: Width -> MemoryOrdering -> CmmExpr -> CmmExpr -> NatM InstrBlock
+genAtomicWrite width mord addr val = do
code <- assignMem_IntCode (intFormat width) addr val
- return $ code `snocOL` MFENCE
+ let needs_fence = case mord of
+ MemOrderSeqCst -> True
+ MemOrderRelease -> True
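+                  -- conservative: on x86-TSO a plain store already has
+                  -- release semantics, so only seq_cst strictly needs MFENCE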
+ MemOrderAcquire -> pprPanic "genAtomicWrite: acquire ordering on write" empty
+ MemOrderRelaxed -> False
+ return $ if needs_fence then code `snocOL` MFENCE else code
genCmpXchg
:: BlockId
=====================================
compiler/GHC/CmmToC.hs
=====================================
@@ -944,8 +944,9 @@ pprCallishMachOp_for_C mop
MO_AtomicRMW w amop -> ftext (atomicRMWLabel w amop)
MO_Cmpxchg w -> ftext (cmpxchgLabel w)
MO_Xchg w -> ftext (xchgLabel w)
- MO_AtomicRead w -> ftext (atomicReadLabel w)
- MO_AtomicWrite w -> ftext (atomicWriteLabel w)
+ -- TODO: handle orderings
+ MO_AtomicRead w _ -> ftext (atomicReadLabel w)
+ MO_AtomicWrite w _ -> ftext (atomicWriteLabel w)
MO_UF_Conv w -> ftext (word2FloatLabel w)
MO_S_Mul2 {} -> unsupported
=====================================
compiler/GHC/CmmToLlvm/CodeGen.hs
=====================================
@@ -45,7 +45,7 @@ import qualified Data.Semigroup as Semigroup
import Data.List ( nub )
import Data.Maybe ( catMaybes )
-type Atomic = Bool
+type Atomic = Maybe MemoryOrdering
type LlvmStatements = OrdList LlvmStatement
data Signage = Signed | Unsigned deriving (Eq, Show)
@@ -265,9 +265,9 @@ genCall (PrimTarget (MO_AtomicRMW width amop)) [dst] [addr, n] = runStmtsDecls $
retVar <- doExprW targetTy $ AtomicRMW op ptrVar nVar SyncSeqCst
statement $ Store retVar dstVar Nothing
-genCall (PrimTarget (MO_AtomicRead _)) [dst] [addr] = runStmtsDecls $ do
+genCall (PrimTarget (MO_AtomicRead _ mem_ord)) [dst] [addr] = runStmtsDecls $ do
dstV <- getCmmRegW (CmmLocal dst)
- v1 <- genLoadW True addr (localRegType dst) NaturallyAligned
+ v1 <- genLoadW (Just mem_ord) addr (localRegType dst) NaturallyAligned
statement $ Store v1 dstV Nothing
genCall (PrimTarget (MO_Cmpxchg _width))
@@ -294,13 +294,14 @@ genCall (PrimTarget (MO_Xchg _width)) [dst] [addr, val] = runStmtsDecls $ do
resVar <- doExprW (getVarType valVar) (AtomicRMW LAO_Xchg ptrVar valVar SyncSeqCst)
statement $ Store resVar dstV Nothing
-genCall (PrimTarget (MO_AtomicWrite _width)) [] [addr, val] = runStmtsDecls $ do
+genCall (PrimTarget (MO_AtomicWrite _width mem_ord)) [] [addr, val] = runStmtsDecls $ do
addrVar <- exprToVarW addr
valVar <- exprToVarW val
let ptrTy = pLift $ getVarType valVar
ptrExpr = Cast LM_Inttoptr addrVar ptrTy
ptrVar <- doExprW ptrTy ptrExpr
- statement $ Expr $ AtomicRMW LAO_Xchg ptrVar valVar SyncSeqCst
+ let ordering = convertMemoryOrdering mem_ord
+ statement $ Expr $ AtomicRMW LAO_Xchg ptrVar valVar ordering
-- Handle memcpy function specifically since llvm's intrinsic version takes
-- some extra parameters.
@@ -1012,11 +1013,11 @@ cmmPrimOpFunctions mop = do
MO_Touch -> unsupported
MO_UF_Conv _ -> unsupported
- MO_AtomicRead _ -> unsupported
- MO_AtomicRMW _ _ -> unsupported
- MO_AtomicWrite _ -> unsupported
- MO_Cmpxchg _ -> unsupported
- MO_Xchg _ -> unsupported
+ MO_AtomicRead _ _ -> unsupported
+ MO_AtomicRMW _ _ -> unsupported
+ MO_AtomicWrite _ _ -> unsupported
+ MO_Cmpxchg _ -> unsupported
+ MO_Xchg _ -> unsupported
MO_I64_ToI -> dontReach64
MO_I64_FromI -> dontReach64
@@ -1368,7 +1369,7 @@ exprToVarOpt opt e = case e of
-> genLit opt lit
CmmLoad e' ty align
- -> genLoad False e' ty align
+ -> genLoad Nothing e' ty align
-- Cmmreg in expression is the value, so must load. If you want actual
-- reg pointer, call getCmmReg directly.
@@ -1890,7 +1891,8 @@ case we will need a more granular way of specifying alignment.
mkLoad :: Atomic -> LlvmVar -> AlignmentSpec -> LlvmExpression
mkLoad atomic vptr alignment
- | atomic = ALoad SyncSeqCst False vptr
+ | Just mem_ord <- atomic
+ = ALoad (convertMemoryOrdering mem_ord) False vptr
| otherwise = Load vptr align
where
ty = pLower (getVarType vptr)
@@ -2027,6 +2029,12 @@ genLit _ CmmHighStackMark
-- * Misc
--
+convertMemoryOrdering :: MemoryOrdering -> LlvmSyncOrdering
+convertMemoryOrdering MemOrderRelaxed = SyncUnord
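+  -- NB: LLVM's 'unordered' is weaker than C11 relaxed; LLVM's 'monotonic'
+  -- is the exact counterpart of memory_order_relaxed.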
+convertMemoryOrdering MemOrderAcquire = SyncAcquire
+convertMemoryOrdering MemOrderRelease = SyncRelease
+convertMemoryOrdering MemOrderSeqCst = SyncSeqCst
+
-- | Find CmmRegs that get assigned and allocate them on the stack
--
-- Any register that gets written needs to be allocated on the
=====================================
compiler/GHC/Driver/Config/Cmm.hs
=====================================
@@ -18,6 +18,7 @@ initCmmConfig dflags = CmmConfig
, cmmDoLinting = gopt Opt_DoCmmLinting dflags
, cmmOptElimCommonBlks = gopt Opt_CmmElimCommonBlocks dflags
, cmmOptSink = gopt Opt_CmmSink dflags
+ , cmmOptThreadSanitizer = gopt Opt_CmmThreadSanitizer dflags
, cmmGenStackUnwindInstr = debugLevel dflags > 0
, cmmExternalDynamicRefs = gopt Opt_ExternalDynamicRefs dflags
, cmmDoCmmSwitchPlans = not (backendHasNativeSwitch (backend dflags))
=====================================
compiler/GHC/Driver/Flags.hs
=====================================
@@ -66,6 +66,7 @@ data DumpFlag
| Opt_D_dump_cmm_split
| Opt_D_dump_cmm_info
| Opt_D_dump_cmm_cps
+ | Opt_D_dump_cmm_thread_sanitizer
-- end cmm subflags
| Opt_D_dump_cfg_weights -- ^ Dump the cfg used for block layout.
| Opt_D_dump_asm
@@ -354,6 +355,7 @@ data GeneralFlag
| Opt_Ticky_Dyn_Thunk
| Opt_Ticky_Tag
| Opt_Ticky_AP -- ^ Use regular thunks even when we could use std ap thunks in order to get entry counts
+ | Opt_CmmThreadSanitizer
| Opt_RPath
| Opt_RelativeDynlibPaths
| Opt_CompactUnwind -- ^ @-fcompact-unwind@
=====================================
compiler/GHC/Driver/Session.hs
=====================================
@@ -2434,6 +2434,8 @@ dynamic_flags_deps = [
(setDumpFlag Opt_D_dump_cmm_cps)
, make_ord_flag defGhcFlag "ddump-cmm-opt"
(setDumpFlag Opt_D_dump_opt_cmm)
+ , make_ord_flag defGhcFlag "ddump-cmm-thread-sanitizer"
+ (setDumpFlag Opt_D_dump_cmm_thread_sanitizer)
, make_ord_flag defGhcFlag "ddump-cfg-weights"
(setDumpFlag Opt_D_dump_cfg_weights)
, make_ord_flag defGhcFlag "ddump-core-stats"
@@ -3511,8 +3513,8 @@ fFlagsDeps = [
unless (platformOS (targetPlatform dflags) == OSDarwin && turn_on)
(addWarn "-compact-unwind is only implemented by the darwin platform. Ignoring.")
return dflags)),
- flagSpec "show-error-context" Opt_ShowErrorContext
-
+ flagSpec "show-error-context" Opt_ShowErrorContext,
+ flagSpec "cmm-thread-sanitizer" Opt_CmmThreadSanitizer
]
++ fHoleFlags
=====================================
compiler/GHC/StgToCmm/ExtCode.hs
=====================================
@@ -231,8 +231,12 @@ emitLabel = code . F.emitLabel
emitAssign :: CmmReg -> CmmExpr -> CmmParse ()
emitAssign l r = code (F.emitAssign l r)
-emitStore :: CmmExpr -> CmmExpr -> CmmParse ()
-emitStore l r = code (F.emitStore l r)
+emitStore :: Maybe MemoryOrdering -> CmmExpr -> CmmExpr -> CmmParse ()
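+-- An ordered store is lowered to an MO_AtomicWrite prim call; a plain store
+-- goes through F.emitStore as before.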
+emitStore (Just mem_ord) l r = do
+ platform <- getPlatform
+ let w = typeWidth $ cmmExprType platform r
+ emit $ mkUnsafeCall (PrimTarget $ MO_AtomicWrite w mem_ord) [] [l,r]
+emitStore Nothing l r = code (F.emitStore l r)
getCode :: CmmParse a -> CmmParse CmmAGraph
getCode (EC ec) = EC $ \c e s -> do
=====================================
compiler/GHC/StgToCmm/Prim.hs
=====================================
@@ -3011,7 +3011,7 @@ doAtomicReadAddr
doAtomicReadAddr res addr ty =
emitPrimCall
[ res ]
- (MO_AtomicRead (typeWidth ty))
+ (MO_AtomicRead (typeWidth ty) MemOrderSeqCst)
[ addr ]
-- | Emit an atomic write to a byte array that acts as a memory barrier.
@@ -3039,7 +3039,7 @@ doAtomicWriteAddr
doAtomicWriteAddr addr ty val =
emitPrimCall
[ {- no results -} ]
- (MO_AtomicWrite (typeWidth ty))
+ (MO_AtomicWrite (typeWidth ty) MemOrderSeqCst)
[ addr, val ]
doCasByteArray
=====================================
compiler/GHC/Tc/Validity.hs
=====================================
@@ -61,7 +61,7 @@ import GHC.Types.Basic ( UnboxedTupleOrSum(..), unboxedTupleOrSumExtension )
import GHC.Types.Name
import GHC.Types.Var.Env
import GHC.Types.Var.Set
-import GHC.Types.Var ( VarBndr(..), mkTyVar )
+import GHC.Types.Var ( VarBndr(..), isInvisibleFunArg, mkTyVar )
import GHC.Utils.FV
import GHC.Utils.Error
import GHC.Driver.Session
@@ -1731,6 +1731,13 @@ the instance head, we'll expand the synonym on fly, and it'll look like
instance (%,%) (Show Int, Show Int)
and we /really/ don't want that. So we carefully do /not/ expand
synonyms, by matching on TyConApp directly.
+
+For similar reasons, we do not use tcSplitSigmaTy when decomposing the instance
+context, as that function looks through type synonyms. If we looked through
+type synonyms, then it would be possible to write an instance for a type synonym
+involving a quantified constraint (see #22570). Instead, we define
+splitInstTyForValidity, a specialized version of tcSplitSigmaTy (local to
+GHC.Tc.Validity) that does not expand type synonyms.
-}
checkValidInstance :: UserTypeCtxt -> LHsSigType GhcRn -> Type -> TcM ()
@@ -1774,11 +1781,31 @@ checkValidInstance ctxt hs_type ty = case tau of
; return () }
_ -> failWithTc (TcRnNoClassInstHead tau)
where
- (_tvs, theta, tau) = tcSplitSigmaTy ty
+ (theta, tau) = splitInstTyForValidity ty
-- The location of the "head" of the instance
head_loc = getLoc (getLHsInstDeclHead hs_type)
+-- | Split an instance type of the form @forall tvbs. inst_ctxt => inst_head@
+-- and return @(inst_ctxt, inst_head)@. This function makes no attempt to look
+-- through type synonyms. See @Note [Instances and constraint synonyms]@.
+splitInstTyForValidity :: Type -> (ThetaType, Type)
+splitInstTyForValidity = split_context [] . drop_foralls
+ where
+ -- This is like 'dropForAlls', except that it does not look through type
+ -- synonyms.
+ drop_foralls :: Type -> Type
+ drop_foralls (ForAllTy (Bndr _tv argf) ty)
+ | isInvisibleForAllTyFlag argf = drop_foralls ty
+ drop_foralls ty = ty
+
+ -- This is like 'tcSplitPhiTy', except that it does not look through type
+ -- synonyms.
+ split_context :: ThetaType -> Type -> (ThetaType, Type)
+ split_context preds (FunTy { ft_af = af, ft_arg = pred, ft_res = tau })
+ | isInvisibleFunArg af = split_context (pred:preds) tau
+ split_context preds ty = (reverse preds, ty)
+
{-
Note [Paterson conditions]
~~~~~~~~~~~~~~~~~~~~~~~~~~
=====================================
compiler/ghc.cabal.in
=====================================
@@ -214,6 +214,7 @@ Library
GHC.Cmm.Sink
GHC.Cmm.Switch
GHC.Cmm.Switch.Implement
+ GHC.Cmm.ThreadSanitizer
GHC.CmmToAsm
GHC.Cmm.LRegSet
GHC.CmmToAsm.AArch64
=====================================
docs/users_guide/debugging.rst
=====================================
@@ -551,6 +551,13 @@ These flags dump various phases of GHC's C-\\- pipeline.
Dump the results of the C-\\- control flow optimisation pass.
+.. ghc-flag:: -ddump-cmm-thread-sanitizer
+ :shortdesc: Dump the results of the C-\\- ThreadSanitizer elaboration pass.
+ :type: dynamic
+
Dump the results of the C-\\- pass responsible for adding the instrumentation
enabled by :ghc-flag:`-fcmm-thread-sanitizer`.
+
.. ghc-flag:: -ddump-cmm-cbe
:shortdesc: Dump the results of common block elimination
:type: dynamic
@@ -1075,6 +1082,15 @@ Checking for consistency
Note that this is only intended to be used as a debugging measure, not as
the primary means of catching out-of-bounds accesses.
+.. ghc-flag:: -fcmm-thread-sanitizer
+ :shortdesc: Enable ThreadSanitizer instrumentation of memory accesses.
+ :type: dynamic
+
+ This enables generation of `ThreadSanitizer
<https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual>`_
+ instrumentation of memory accesses. Requires use of ``-fsanitize=thread``
+ or similar when compiling and linking.
+
.. _checking-determinism:
Checking for determinism
=====================================
hadrian/src/Flavour.hs
=====================================
@@ -203,12 +203,14 @@ splitSections = splitSectionsIf (/=ghc)
-- Disable section splitting for the GHC library. It takes too long and
-- there is little benefit.
+-- | Build GHC and libraries with ThreadSanitizer support. You likely want to
+-- configure with @--disable-large-address-space@ when using this.
enableThreadSanitizer :: Flavour -> Flavour
-enableThreadSanitizer = addArgs $ mconcat
- [ builder (Ghc CompileHs) ? arg "-optc-fsanitize=thread"
- , builder (Ghc CompileCWithGhc) ? (arg "-optc-fsanitize=thread" <> arg "-DTSAN_ENABLED")
- , builder (Ghc LinkHs) ? arg "-optl-fsanitize=thread"
- , builder (Cc CompileC) ? (arg "-fsanitize=thread" <> arg "-DTSAN_ENABLED")
+enableThreadSanitizer = addArgs $ notStage0 ? mconcat
+ [ builder (Ghc CompileHs) ? (arg "-optc-fsanitize=thread" <> arg "-fcmm-thread-sanitizer")
+ , builder (Ghc CompileCWithGhc) ? arg "-optc-fsanitize=thread"
+ , builder (Ghc LinkHs) ? (arg "-optc-fsanitize=thread" <> arg "-optl-fsanitize=thread")
+ , builder Cc ? arg "-fsanitize=thread"
, builder (Cabal Flags) ? arg "thread-sanitizer"
, builder Testsuite ? arg "--config=have_thread_sanitizer=True"
]
=====================================
rts/HeapStackCheck.cmm
=====================================
@@ -569,7 +569,7 @@ stg_block_takemvar_finally
W_ r1, r3;
r1 = R1;
r3 = R3;
- unlockClosure(R3, stg_MVAR_DIRTY_info);
+ unlockClosure(r3, stg_MVAR_DIRTY_info);
R1 = r1;
R3 = r3;
jump StgReturn [R1];
@@ -597,7 +597,7 @@ stg_block_readmvar_finally
W_ r1, r3;
r1 = R1;
r3 = R3;
- unlockClosure(R3, stg_MVAR_DIRTY_info);
+ unlockClosure(r3, stg_MVAR_DIRTY_info);
R1 = r1;
R3 = r3;
jump StgReturn [R1];
@@ -625,7 +625,7 @@ stg_block_putmvar_finally
W_ r1, r3;
r1 = R1;
r3 = R3;
- unlockClosure(R3, stg_MVAR_DIRTY_info);
+ unlockClosure(r3, stg_MVAR_DIRTY_info);
R1 = r1;
R3 = r3;
jump StgReturn [R1];
=====================================
rts/TSANUtils.c
=====================================
@@ -0,0 +1,37 @@
+#include <Rts.h>
+
+#if defined(TSAN_ENABLED)
+
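+// These shims adapt __tsan_atomicN_compare_exchange_strong, which returns a
+// boolean and writes the observed value through 'expected', to Cmm's cmpxchg
+// convention of returning the observed value directly.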
+uint64_t ghc_tsan_atomic64_compare_exchange(uint64_t *ptr, uint64_t expected, uint64_t new_value, int success_memorder, int failure_memorder)
+{
+ __tsan_atomic64_compare_exchange_strong(
+ ptr, &expected, new_value,
+ success_memorder, failure_memorder);
+ return expected;
+}
+
+uint32_t ghc_tsan_atomic32_compare_exchange(uint32_t *ptr, uint32_t expected, uint32_t new_value, int success_memorder, int failure_memorder)
+{
+ __tsan_atomic32_compare_exchange_strong(
+ ptr, &expected, new_value,
+ success_memorder, failure_memorder);
+ return expected;
+}
+
+uint16_t ghc_tsan_atomic16_compare_exchange(uint16_t *ptr, uint16_t expected, uint16_t new_value, int success_memorder, int failure_memorder)
+{
+ __tsan_atomic16_compare_exchange_strong(
+ ptr, &expected, new_value,
+ success_memorder, failure_memorder);
+ return expected;
+}
+
+uint8_t ghc_tsan_atomic8_compare_exchange(uint8_t *ptr, uint8_t expected, uint8_t new_value, int success_memorder, int failure_memorder)
+{
+ __tsan_atomic8_compare_exchange_strong(
+ ptr, &expected, new_value,
+ success_memorder, failure_memorder);
+ return expected;
+}
+
+#endif
=====================================
rts/include/rts/TSANUtils.h
=====================================
@@ -65,3 +65,10 @@ void AnnotateBenignRaceSized(const char *file,
#define TSAN_ANNOTATE_BENIGN_RACE(addr,desc) \
TSAN_ANNOTATE_BENIGN_RACE_SIZED((void*)(addr), sizeof(*addr), desc)
+
+
+uint64_t ghc_tsan_atomic64_compare_exchange(uint64_t *ptr, uint64_t expected, uint64_t new_value, int success_memorder, int failure_memorder);
+uint32_t ghc_tsan_atomic32_compare_exchange(uint32_t *ptr, uint32_t expected, uint32_t new_value, int success_memorder, int failure_memorder);
+uint16_t ghc_tsan_atomic16_compare_exchange(uint16_t *ptr, uint16_t expected, uint16_t new_value, int success_memorder, int failure_memorder);
+uint8_t ghc_tsan_atomic8_compare_exchange(uint8_t *ptr, uint8_t expected, uint8_t new_value, int success_memorder, int failure_memorder);
+
=====================================
rts/rts.cabal.in
=====================================
@@ -588,6 +588,7 @@ library
Trace.c
TraverseHeap.c
TraverseHeapTest.c
+ TSANUtils.c
WSDeque.c
Weak.c
eventlog/EventLog.c
=====================================
testsuite/tests/typecheck/should_fail/T22570.hs
=====================================
@@ -0,0 +1,15 @@
+{-# LANGUAGE QuantifiedConstraints #-}
+module T22570 where
+
+import Data.Kind
+
+class SomeClass a
+class OtherClass
+
+type SomeClassUnit = OtherClass => SomeClass () :: Constraint
+
+instance SomeClassUnit
+
+type SomeClassSyn a = OtherClass => SomeClass a :: Constraint
+
+instance SomeClassSyn ()
=====================================
testsuite/tests/typecheck/should_fail/T22570.stderr
=====================================
@@ -0,0 +1,10 @@
+
+T22570.hs:11:10: error: [GHC-53946]
+ • Illegal instance for a type synonym
+ A class instance must be for a class
+ • In the instance declaration for ‘SomeClassUnit’
+
+T22570.hs:15:10: error: [GHC-53946]
+ • Illegal instance for a type synonym
+ A class instance must be for a class
+ • In the instance declaration for ‘SomeClassSyn ()’
=====================================
testsuite/tests/typecheck/should_fail/all.T
=====================================
@@ -665,3 +665,4 @@ test('MissingDefaultMethodBinding', normal, compile_fail, [''])
test('T21447', normal, compile_fail, [''])
test('T21530a', normal, compile_fail, [''])
test('T21530b', normal, compile_fail, [''])
+test('T22570', normal, compile_fail, [''])
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/490b79cb641e8927a2341b63153926cc64482919...de68c3dd38d8aca5e0b44d48545a9e101a2e9fd3