[Git][ghc/ghc][wip/marge_bot_batch_merge_job] 5 commits: Fix numa auto configure
Marge Bot (@marge-bot)
gitlab at gitlab.haskell.org
Wed Sep 13 20:41:35 UTC 2023
Marge Bot pushed to branch wip/marge_bot_batch_merge_job at Glasgow Haskell Compiler / GHC
Commits:
9217950b by Finley McIlwaine at 2023-09-13T08:06:03-04:00
Fix numa auto configure
- - - - -
98e7c1cf by Simon Peyton Jones at 2023-09-13T08:06:40-04:00
Add -fno-cse to T15426 and T18964
This -fno-cse change is to avoid these performance tests depending on
flukey CSE stuff. Each contains several independent tests, and we don't
want them to interact.
See #23925.
By killing CSE we expect a 400% increase in T15426, and 100% in T18964.
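For illustration only (a minimal sketch, not the code of T15426/T18964): with -O
and CSE enabled, two textually identical top-level bindings are typically commoned
up, so the second "independent" measurement just reuses the first's result;
-fno-cse keeps them separate.

    {-# OPTIONS_GHC -O -fno-cse #-}
    module Main (main) where

    import Control.Exception (evaluate)

    -- Two nominally independent micro-benchmarks with identical bodies.
    -- Without -fno-cse, GHC's CSE pass would typically rewrite test2 to
    -- reuse test1, so timing them separately would be meaningless.
    test1 :: Int
    test1 = sum [1 .. 1000000]

    test2 :: Int
    test2 = sum [1 .. 1000000]

    main :: IO ()
    main = do
      _ <- evaluate test1
      _ <- evaluate test2
      print (test1, test2)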
Metric Increase:
T15426
T18964
- - - - -
236a134e by Simon Peyton Jones at 2023-09-13T08:06:40-04:00
Tiny refactor
canEtaReduceToArity was only called internally, and always with its
two arity arguments equal to zero. This patch specialises the
function to that case and renames it to cantEtaReduceFun.
No change in behaviour.
- - - - -
538abd37 by Ben Gamari at 2023-09-13T16:41:26-04:00
spec-constr: Lift argument limit for SPEC-marked functions
When the user adds a SPEC argument to a function, they are informing us
that they expect the function to be specialised. However, previously
this instruction could be preempted by the specialised-argument limit
(sc_max_args). Fix this.
This fixes #14003.
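A hedged sketch of the behaviour being fixed (hypothetical names; the real
regression test is T14003.hs, added below): 'step' scrutinises three Maybe
arguments, more value arguments than -fmax-worker-args=2 would normally allow
for a specialised worker, but because it takes a SPEC argument the limit no
longer applies.

    {-# OPTIONS_GHC -O -fspec-constr -fmax-worker-args=2 #-}
    module SpecForceSketch (demo) where

    import GHC.Exts (SPEC (..))

    -- The SPEC argument asks SpecConstr to specialise 'step' on the
    -- constructor patterns of its three Maybe arguments, even though
    -- that exceeds -fmax-worker-args=2.
    step :: SPEC -> Maybe Int -> Maybe Int -> Maybe Int -> Int
    step SPEC (Just a) (Just b) (Just c) = a * b + c
    step SPEC (Just a) (Just b) Nothing  = step SPEC (Just a) (Just b) (Just 0)
    step SPEC (Just a) Nothing  mc       = step SPEC (Just a) (Just 1) mc
    step SPEC Nothing  _        _        = 0

    demo :: Int -> Int
    demo n = step SPEC (Just n) Nothing (Just n)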
- - - - -
b56e0cd3 by Simon Peyton Jones at 2023-09-13T16:41:27-04:00
Fix eta reduction
Issue #23922 showed that GHC was bogusly eta-reducing a join point.
We should never eta-reduce (\x -> j x) to j, if j is a join point.
It is extremely difficult to trigger this bug. It took me 45 mins of
trying to make a small test case, here immortalised as T23922a.
- - - - -
9 changed files:
- compiler/GHC/Core/Opt/Arity.hs
- compiler/GHC/Core/Opt/SpecConstr.hs
- m4/fp_find_libnuma.m4
- testsuite/tests/perf/should_run/T15426.hs
- testsuite/tests/perf/should_run/T18964.hs
- + testsuite/tests/simplCore/should_compile/T14003.hs
- + testsuite/tests/simplCore/should_compile/T14003.stderr
- + testsuite/tests/simplCore/should_compile/T23922a.hs
- testsuite/tests/simplCore/should_compile/all.T
Changes:
=====================================
compiler/GHC/Core/Opt/Arity.hs
=====================================
@@ -87,6 +87,8 @@ import GHC.Utils.Outputable
import GHC.Utils.Panic
import GHC.Utils.Misc
+import Data.Maybe( isJust )
+
{-
************************************************************************
* *
@@ -2326,18 +2328,6 @@ This test is made by `ok_fun` in tryEtaReduce.
* `/\a. \x. f @(Maybe a) x --> /\a. f @(Maybe a)`
See Note [Do not eta reduce PAPs] for why we insist on a trivial head.
-2. Type and dictionary abstraction. Regardless of whether 'f' is a value, it
- is always sound to reduce /type lambdas/, thus:
- (/\a -> f a) --> f
- Moreover, we always want to, because it makes RULEs apply more often:
- This RULE: `forall g. foldr (build (/\a -> g a))`
- should match `foldr (build (/\b -> ...something complex...))`
- and the simplest way to do so is eta-reduce `/\a -> g a` in the RULE to `g`.
-
- The type checker can insert these eta-expanded versions,
- with both type and dictionary lambdas; hence the slightly
- ad-hoc (all ok_lam bndrs)
-
Of course, eta reduction is not always sound. See Note [Eta reduction soundness]
for when it is.
@@ -2376,7 +2366,7 @@ perform eta reduction on an expression with n leading lambdas `\xs. e xs`
(checked in 'is_eta_reduction_sound' in 'tryEtaReduce', which focuses on the
case where `e` is trivial):
- A. It is sound to eta-reduce n arguments as long as n does not exceed the
+(A) It is sound to eta-reduce n arguments as long as n does not exceed the
`exprArity` of `e`. (Needs Arity analysis.)
This criterion exploits information about how `e` is *defined*.
@@ -2385,7 +2375,7 @@ case where `e` is trivial):
By contrast, it would be *unsound* to eta-reduce 2 args, `\x y. e x y` to `e`:
`e 42` diverges when `(\x y. e x y) 42` does not.
- S. It is sound to eta-reduce n arguments in an evaluation context in which all
+(S) It is sound to eta-reduce n arguments in an evaluation context in which all
calls happen with at least n arguments. (Needs Strictness analysis.)
NB: This treats evaluations like a call with 0 args.
NB: This criterion exploits information about how `e` is *used*.
@@ -2412,23 +2402,42 @@ case where `e` is trivial):
See Note [Eta reduction based on evaluation context] for the implementation
details. This criterion is tested extensively in T21261.
- R. Note [Eta reduction in recursive RHSs] tells us that we should not
+(R) Note [Eta reduction in recursive RHSs] tells us that we should not
eta-reduce `f` in its own RHS and describes our fix.
There we have `f = \x. f x` and we should not eta-reduce to `f=f`. Which
might change a terminating program (think @f `seq` e@) to a non-terminating
one.
- E. (See fun_arity in tryEtaReduce.) As a perhaps special case on the
+(E) (See fun_arity in tryEtaReduce.) As a perhaps special case on the
boundary of (A) and (S), when we know that a fun binder `f` is in
WHNF, we simply assume it has arity 1 and apply (A). Example:
g f = f `seq` \x. f x
Here it's sound eta-reduce `\x. f x` to `f`, because `f` can't be bottom
after the `seq`. This turned up in #7542.
+(T) If the binders are all type arguments, it's always safe to eta-reduce,
+ regardless of the arity of f.
+ /\a b. f @a @b --> f
+
+2. Type and dictionary abstraction. Regardless of whether 'f' is a value, it
+ is always sound to reduce /type lambdas/, thus:
+ (/\a -> f a) --> f
+ Moreover, we always want to, because it makes RULEs apply more often:
+ This RULE: `forall g. foldr (build (/\a -> g a))`
+ should match `foldr (build (/\b -> ...something complex...))`
+ and the simplest way to do so is eta-reduce `/\a -> g a` in the RULE to `g`.
+
+ More debatably, we extend this to dictionary arguments too, because the type
+ checker can insert these eta-expanded versions, with both type and dictionary
+ lambdas; hence the slightly ad-hoc (all ok_lam bndrs). That is, we eta-reduce
+ \(d::Num a). f d --> f
+ regardless of f's arity. It's not clear whether or not this is important, and
+ it is not in general sound. But that's the way it is right now.
+
And here are a few more technical criteria for when it is *not* sound to
eta-reduce that are specific to Core and GHC:
- L. With linear types, eta-reduction can break type-checking:
+(L) With linear types, eta-reduction can break type-checking:
f :: A ⊸ B
g :: A -> B
g = \x. f x
@@ -2436,13 +2445,13 @@ eta-reduce that are specific to Core and GHC:
complain that g and f don't have the same type. NB: Not unsound in the
dynamic semantics, but unsound according to the static semantics of Core.
- J. We may not undersaturate join points.
+(J) We may not undersaturate join points.
See Note [Invariants on join points] in GHC.Core, and #20599.
- B. We may not undersaturate functions with no binding.
+(B) We may not undersaturate functions with no binding.
See Note [Eta expanding primops].
- W. We may not undersaturate StrictWorkerIds.
+(W) We may not undersaturate StrictWorkerIds.
See Note [CBV Function Ids] in GHC.Types.Id.Info.
Here is a list of historic accidents surrounding unsound eta-reduction:
@@ -2686,20 +2695,25 @@ tryEtaReduce rec_ids bndrs body eval_sd
ok_fun (App fun (Type {})) = ok_fun fun
ok_fun (Cast fun _) = ok_fun fun
ok_fun (Tick _ expr) = ok_fun expr
- ok_fun (Var fun_id) = is_eta_reduction_sound fun_id || all ok_lam bndrs
+ ok_fun (Var fun_id) = is_eta_reduction_sound fun_id
ok_fun _fun = False
---------------
-- See Note [Eta reduction soundness], this is THE place to check soundness!
- is_eta_reduction_sound fun =
- -- Don't eta-reduce in fun in its own recursive RHSs
- not (fun `elemUnVarSet` rec_ids) -- criterion (R)
- -- Check that eta-reduction won't make the program stricter...
- && (fun_arity fun >= incoming_arity -- criterion (A) and (E)
- || all_calls_with_arity incoming_arity) -- criterion (S)
- -- ... and that the function can be eta reduced to arity 0
- -- without violating invariants of Core and GHC
- && canEtaReduceToArity fun 0 0 -- criteria (L), (J), (W), (B)
+ is_eta_reduction_sound fun
+ | fun `elemUnVarSet` rec_ids -- Criterion (R)
+ = False -- Don't eta-reduce in fun in its own recursive RHSs
+
+ | cantEtaReduceFun fun -- Criteria (L), (J), (W), (B)
+ = False -- Function can't be eta reduced to arity 0
+ -- without violating invariants of Core and GHC
+
+ | otherwise
+ = -- Check that eta-reduction won't make the program stricter...
+ fun_arity fun >= incoming_arity -- Criterion (A) and (E)
+ || all_calls_with_arity incoming_arity -- Criterion (S)
+ || all ok_lam bndrs -- Criterion (T)
+
all_calls_with_arity n = isStrict (fst $ peelManyCalls n eval_sd)
-- See Note [Eta reduction based on evaluation context]
@@ -2754,19 +2768,18 @@ tryEtaReduce rec_ids bndrs body eval_sd
ok_arg _ _ _ _ = Nothing
--- | Can we eta-reduce the given function to the specified arity?
+-- | Can we eta-reduce the given function
-- See Note [Eta reduction soundness], criteria (B), (J), (W) and (L).
-canEtaReduceToArity :: Id -> JoinArity -> Arity -> Bool
-canEtaReduceToArity fun dest_join_arity dest_arity =
- not $
- hasNoBinding fun -- (B)
+cantEtaReduceFun :: Id -> Bool
+cantEtaReduceFun fun
+ = hasNoBinding fun -- (B)
-- Don't undersaturate functions with no binding.
- || ( isJoinId fun && dest_join_arity < idJoinArity fun ) -- (J)
+ || isJoinId fun -- (J)
-- Don't undersaturate join points.
-- See Note [Invariants on join points] in GHC.Core, and #20599
- || ( dest_arity < idCbvMarkArity fun ) -- (W)
+ || (isJust (idCbvMarks_maybe fun)) -- (W)
-- Don't undersaturate StrictWorkerIds.
-- See Note [CBV Function Ids] in GHC.Types.Id.Info.
=====================================
compiler/GHC/Core/Opt/SpecConstr.hs
=====================================
@@ -519,14 +519,17 @@ This is all quite ugly; we ought to come up with a better design.
ForceSpecConstr arguments are spotted in scExpr' and scTopBinds which then set
sc_force to True when calling specLoop. This flag does four things:
- * Ignore specConstrThreshold, to specialise functions of arbitrary size
+(FS1) Ignore specConstrThreshold, to specialise functions of arbitrary size
(see scTopBind)
- * Ignore specConstrCount, to make arbitrary numbers of specialisations
+(FS2) Ignore specConstrCount, to make arbitrary numbers of specialisations
(see specialise)
- * Specialise even for arguments that are not scrutinised in the loop
+(FS3) Specialise even for arguments that are not scrutinised in the loop
(see argToPat; #4448)
- * Only specialise on recursive types a finite number of times
- (see is_too_recursive; #5550; Note [Limit recursive specialisation])
+(FS4) Only specialise on recursive types a finite number of times
+ (see sc_recursive; #5550; Note [Limit recursive specialisation])
+(FS5) Lift the restriction on the maximum number of arguments which
+ the optimisation will specialise.
+ (see `too_many_worker_args` in `callsToNewPats`; #14003)
The flag holds only for specialising a single binding group, and NOT
for nested bindings. (So really it should be passed around explicitly
@@ -1403,7 +1406,7 @@ scBind top_lvl env (NonRec bndr rhs) do_body
scBind top_lvl env (Rec prs) do_body
| isTopLevel top_lvl
, Just threshold <- sc_size (sc_opts env)
- , not force_spec
+ , not force_spec -- See Note [Forcing specialisation], point (FS1)
, not (all (couldBeSmallEnoughToInline (sc_uf_opts (sc_opts env)) threshold) rhss)
= -- Do no specialisation if the RHSs are too big
-- ToDo: I'm honestly not sure of the rationale of this size-testing, nor
@@ -1773,6 +1776,7 @@ specRec env body_calls rhs_infos
, sc_force env || isNothing (sc_count opts)
-- If both of these are false, the sc_count
-- threshold will prevent non-termination
+ -- See Note [Forcing specialisation], points (FS4) and (FS2)
, any ((> the_limit) . si_n_specs) spec_infos
= -- Give up on specialisation, but don't forget to include the rhs_usg
-- for the unspecialised function, since it may now be called
@@ -2399,8 +2403,11 @@ callsToNewPats env fn spec_info@(SI { si_specs = done_specs }) bndr_occs calls
non_dups = nubBy samePat new_pats
-- Remove ones that have too many worker variables
- small_pats = filterOut too_big non_dups
- too_big (CP { cp_qvars = vars, cp_args = args })
+ small_pats = filterOut too_many_worker_args non_dups
+
+ too_many_worker_args _
+ | sc_force env = False -- See (FS5) of Note [Forcing specialisation]
+ too_many_worker_args (CP { cp_qvars = vars, cp_args = args })
= not (isWorkerSmallEnough (sc_max_args $ sc_opts env) (valArgCount args) vars)
-- We are about to construct w/w pair in 'spec_one'.
-- Omit specialisation leading to high arity workers.
@@ -2693,6 +2700,7 @@ argToPat1 env in_scope val_env arg arg_occ _arg_str
-- In that case it counts as "interesting"
argToPat1 env in_scope val_env (Var v) arg_occ arg_str
| sc_force env || specialisableArgOcc arg_occ -- (a)
+ -- See Note [Forcing specialisation], point (FS3)
, is_value -- (b)
-- Ignoring sc_keen here to avoid gratuitously incurring Note [Reboxing]
-- So sc_keen focused just on f (I# x), where we have freshly-allocated
=====================================
m4/fp_find_libnuma.m4
=====================================
@@ -30,7 +30,7 @@ AC_DEFUN([FP_FIND_LIBNUMA],
[Enable NUMA memory policy and thread affinity support in the
runtime system via numactl's libnuma [default=auto]])])
- if test "$enable_numa" = "yes" ; then
+ if test "$enable_numa" != "no" ; then
CFLAGS2="$CFLAGS"
CFLAGS="$LIBNUMA_CFLAGS $CFLAGS"
LDFLAGS2="$LDFLAGS"
@@ -41,7 +41,7 @@ AC_DEFUN([FP_FIND_LIBNUMA],
if test "$ac_cv_header_numa_h$ac_cv_header_numaif_h" = "yesyes" ; then
AC_CHECK_LIB(numa, numa_available,HaveLibNuma=1)
fi
- if test "$HaveLibNuma" = "0" ; then
+ if test "$enable_numa:$HaveLibNuma" = "yes:0" ; then
AC_MSG_ERROR([Cannot find system libnuma (required by --enable-numa)])
fi
=====================================
testsuite/tests/perf/should_run/T15426.hs
=====================================
@@ -1,3 +1,8 @@
+{-# OPTIONS_GHC -fno-cse #-}
+ -- Avoid depending on flukey CSE; there are really 5 independent
+ -- tests in this module, and we don't want them to interact.
+ -- See #23925
+
import Control.Exception (evaluate)
import qualified Data.List as L
@@ -28,4 +33,4 @@ As a result these lists are now floated out and shared.
Just leaving breadcrumbs, in case we later see big perf changes on
this (slightly fragile) benchmark.
--}
\ No newline at end of file
+-}
=====================================
testsuite/tests/perf/should_run/T18964.hs
=====================================
@@ -1,3 +1,8 @@
+{-# OPTIONS_GHC -fno-cse #-}
+ -- Avoid depending on flukey CSE; there are really 4 independent
+ -- tests in this module, and we don't want them to interact.
+ -- See #23925
+
import GHC.Exts
import Data.Int
=====================================
testsuite/tests/simplCore/should_compile/T14003.hs
=====================================
@@ -0,0 +1,30 @@
+{-# OPTIONS_GHC -fspec-constr -fmax-worker-args=2 #-}
+
+-- | Ensure that functions with SPEC arguments are constructor-specialised
+-- even if their argument count exceeds -fmax-worker-args.
+module T14003 (pat1, pat2, pat3, pat4) where
+
+import GHC.Exts
+
+hi :: SPEC
+ -> Maybe Int
+ -> Maybe Int
+ -> Maybe Int
+ -> Int
+hi SPEC (Just x) (Just y) (Just z) = x+y+z
+hi SPEC (Just x) _ _ = hi SPEC (Just x) (Just 42) Nothing
+hi SPEC Nothing _ _ = 42
+
+pat1 :: Int -> Int
+pat1 n = hi SPEC (Just n) (Just 4) (Just 0)
+
+pat2 :: Int -> Int
+pat2 n = hi SPEC Nothing (Just n) Nothing
+
+pat3 :: Int -> Int
+pat3 n = hi SPEC Nothing Nothing (Just n)
+
+pat4 :: Int -> Int
+pat4 n = hi SPEC Nothing (Just n) (Just n)
+
+
=====================================
testsuite/tests/simplCore/should_compile/T14003.stderr
=====================================
@@ -0,0 +1,349 @@
+
+==================== SpecConstr ====================
+Result size of SpecConstr
+ = {terms: 179, types: 124, coercions: 0, joins: 0/0}
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule_sF4 :: Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 20 0}]
+$trModule_sF4 = "main"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule_sF5 :: GHC.Types.TrName
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+$trModule_sF5 = GHC.Types.TrNameS $trModule_sF4
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule_sF6 :: Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 30 0}]
+$trModule_sF6 = "T14003"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule_sF7 :: GHC.Types.TrName
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+$trModule_sF7 = GHC.Types.TrNameS $trModule_sF6
+
+-- RHS size: {terms: 3, types: 0, coercions: 0, joins: 0/0}
+T14003.$trModule :: GHC.Types.Module
+[LclIdX,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+T14003.$trModule = GHC.Types.Module $trModule_sF5 $trModule_sF7
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+lvl_sFY :: Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 100 0}]
+lvl_sFY = "T14003.hs:(14,1)-(16,39)|function hi"#
+
+-- RHS size: {terms: 2, types: 2, coercions: 0, joins: 0/0}
+lvl_sFp :: ()
+[LclId,
+ Str=b,
+ Cpr=b,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=False, ConLike=False, WorkFree=False, Expandable=False,
+ Guidance=NEVER}]
+lvl_sFp = Control.Exception.Base.patError @LiftedRep @() lvl_sFY
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl_sFm :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+lvl_sFm = GHC.Types.I# 42#
+
+-- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0}
+lvl_sFn :: Maybe Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+lvl_sFn = GHC.Maybe.Just @Int lvl_sFm
+
+Rec {
+-- RHS size: {terms: 8, types: 4, coercions: 0, joins: 0/0}
+$s$whi_sGi :: Int# -> Int -> Int#
+[LclId[StrictWorker([])], Arity=2, Str=<L><L>]
+$s$whi_sGi
+ = \ (sc_sGf :: Int#) (sc_sGe :: Int) ->
+ $whi_sFB
+ GHC.Types.SPEC
+ (GHC.Maybe.Just @Int sc_sGe)
+ lvl_sFn
+ (GHC.Maybe.Nothing @Int)
+
+-- RHS size: {terms: 11, types: 5, coercions: 0, joins: 0/0}
+$s$whi_sGa :: Int# -> Int# -> Int -> Int#
+[LclId[StrictWorker([])], Arity=3, Str=<L><L><L>]
+$s$whi_sGa
+ = \ (sc_sG5 :: Int#) (sc_sG4 :: Int#) (sc_sG3 :: Int) ->
+ case sc_sG3 of { I# x_aFe -> +# (+# x_aFe sc_sG4) sc_sG5 }
+
+-- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0}
+$s$whi_sGb :: Int -> Int#
+[LclId[StrictWorker([])], Arity=1, Str=<L>]
+$s$whi_sGb = \ (sc_sG6 :: Int) -> 42#
+
+-- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0}
+$s$whi_sGc :: Int -> Int#
+[LclId[StrictWorker([])], Arity=1, Str=<L>]
+$s$whi_sGc = \ (sc_sG7 :: Int) -> 42#
+
+-- RHS size: {terms: 3, types: 2, coercions: 0, joins: 0/0}
+$s$whi_sGd :: Int -> Int -> Int#
+[LclId[StrictWorker([])], Arity=2, Str=<L><L>]
+$s$whi_sGd = \ (sc_sG9 :: Int) (sc_sG8 :: Int) -> 42#
+
+-- RHS size: {terms: 47, types: 26, coercions: 0, joins: 0/0}
+$whi_sFB [InlPrag=[2], Occ=LoopBreaker]
+ :: SPEC -> Maybe Int -> Maybe Int -> Maybe Int -> Int#
+[LclId[StrictWorker([])],
+ Arity=4,
+ Str=<SL><SL><L><L>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [30 30 80 62] 212 0},
+ RULES: "SC:$whi4" [2]
+ forall (sc_sGf :: Int#) (sc_sGe :: Int).
+ $whi_sFB GHC.Types.SPEC
+ (GHC.Maybe.Just @Int sc_sGe)
+ (GHC.Maybe.Just @Int (GHC.Types.I# sc_sGf))
+ (GHC.Maybe.Nothing @Int)
+ = $s$whi_sGi sc_sGf sc_sGe
+ "SC:$whi0" [2]
+ forall (sc_sG5 :: Int#) (sc_sG4 :: Int#) (sc_sG3 :: Int).
+ $whi_sFB GHC.Types.SPEC
+ (GHC.Maybe.Just @Int sc_sG3)
+ (GHC.Maybe.Just @Int (GHC.Types.I# sc_sG4))
+ (GHC.Maybe.Just @Int (GHC.Types.I# sc_sG5))
+ = $s$whi_sGa sc_sG5 sc_sG4 sc_sG3
+ "SC:$whi1" [2]
+ forall (sc_sG6 :: Int).
+ $whi_sFB GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int sc_sG6)
+ (GHC.Maybe.Nothing @Int)
+ = $s$whi_sGb sc_sG6
+ "SC:$whi2" [2]
+ forall (sc_sG7 :: Int).
+ $whi_sFB GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int sc_sG7)
+ = $s$whi_sGc sc_sG7
+ "SC:$whi3" [2]
+ forall (sc_sG9 :: Int) (sc_sG8 :: Int).
+ $whi_sFB GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int sc_sG8)
+ (GHC.Maybe.Just @Int sc_sG9)
+ = $s$whi_sGd sc_sG9 sc_sG8]
+$whi_sFB
+ = \ (ds_sFv [Dmd=SL] :: SPEC)
+ (ds_sFw [Dmd=SL] :: Maybe Int)
+ (ds_sFx :: Maybe Int)
+ (ds_sFy :: Maybe Int) ->
+ case ds_sFv of {
+ SPEC ->
+ case ds_sFw of wild_X2 [Dmd=A] {
+ Nothing -> 42#;
+ Just x_ayD [Dmd=S] ->
+ case ds_sFx of {
+ Nothing ->
+ $whi_sFB GHC.Types.SPEC wild_X2 lvl_sFn (GHC.Maybe.Nothing @Int);
+ Just y_ayE [Dmd=S!P(S)] ->
+ case ds_sFy of {
+ Nothing ->
+ $whi_sFB GHC.Types.SPEC wild_X2 lvl_sFn (GHC.Maybe.Nothing @Int);
+ Just z_ayF [Dmd=S!P(S)] ->
+ case x_ayD of { I# x_aFe ->
+ case y_ayE of { I# y_aFh ->
+ case z_ayF of { I# y_X7 -> +# (+# x_aFe y_aFh) y_X7 }
+ }
+ }
+ }
+ }
+ };
+ SPEC2 -> case lvl_sFp of {}
+ }
+end Rec }
+
+-- RHS size: {terms: 13, types: 8, coercions: 0, joins: 0/0}
+hi [InlPrag=[2]]
+ :: SPEC -> Maybe Int -> Maybe Int -> Maybe Int -> Int
+[LclId,
+ Arity=4,
+ Str=<SL><SL><L><L>,
+ Cpr=1,
+ Unf=Unf{Src=StableSystem, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=4,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (ds_sFv [Occ=Once1, Dmd=SL] :: SPEC)
+ (ds_sFw [Occ=Once1, Dmd=SL] :: Maybe Int)
+ (ds_sFx [Occ=Once1] :: Maybe Int)
+ (ds_sFy [Occ=Once1] :: Maybe Int) ->
+ case $whi_sFB ds_sFv ds_sFw ds_sFx ds_sFy of ww_sFS [Occ=Once1]
+ { __DEFAULT ->
+ GHC.Types.I# ww_sFS
+ }}]
+hi
+ = \ (ds_sFv [Dmd=SL] :: SPEC)
+ (ds_sFw [Dmd=SL] :: Maybe Int)
+ (ds_sFx :: Maybe Int)
+ (ds_sFy :: Maybe Int) ->
+ case $whi_sFB ds_sFv ds_sFw ds_sFx ds_sFy of ww_sFS { __DEFAULT ->
+ GHC.Types.I# ww_sFS
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl_sFq :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+lvl_sFq = GHC.Types.I# 4#
+
+-- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0}
+lvl_sFr :: Maybe Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+lvl_sFr = GHC.Maybe.Just @Int lvl_sFq
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl_sFs :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+lvl_sFs = GHC.Types.I# 0#
+
+-- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0}
+lvl_sFt :: Maybe Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=IF_ARGS [] 10 10}]
+lvl_sFt = GHC.Maybe.Just @Int lvl_sFs
+
+-- RHS size: {terms: 11, types: 3, coercions: 0, joins: 0/0}
+pat1 :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<L>,
+ Cpr=1,
+ Unf=Unf{Src=StableSystem, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (n_aBn [Occ=Once1] :: Int) ->
+ hi GHC.Types.SPEC (GHC.Maybe.Just @Int n_aBn) lvl_sFr lvl_sFt}]
+pat1
+ = \ (n_aBn :: Int) ->
+ case $whi_sFB
+ GHC.Types.SPEC (GHC.Maybe.Just @Int n_aBn) lvl_sFr lvl_sFt
+ of ww_sFS
+ { __DEFAULT ->
+ GHC.Types.I# ww_sFS
+ }
+
+-- RHS size: {terms: 11, types: 5, coercions: 0, joins: 0/0}
+pat2 :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<L>,
+ Cpr=1,
+ Unf=Unf{Src=StableSystem, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (n_aBo [Occ=Once1] :: Int) ->
+ hi
+ GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int n_aBo)
+ (GHC.Maybe.Nothing @Int)}]
+pat2
+ = \ (n_aBo :: Int) ->
+ case $whi_sFB
+ GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int n_aBo)
+ (GHC.Maybe.Nothing @Int)
+ of ww_sFS
+ { __DEFAULT ->
+ GHC.Types.I# ww_sFS
+ }
+
+-- RHS size: {terms: 11, types: 5, coercions: 0, joins: 0/0}
+pat3 :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<L>,
+ Cpr=1,
+ Unf=Unf{Src=StableSystem, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (n_aBp [Occ=Once1] :: Int) ->
+ hi
+ GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int n_aBp)}]
+pat3
+ = \ (n_aBp :: Int) ->
+ case $whi_sFB
+ GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int n_aBp)
+ of ww_sFS
+ { __DEFAULT ->
+ GHC.Types.I# ww_sFS
+ }
+
+-- RHS size: {terms: 12, types: 5, coercions: 0, joins: 0/0}
+pat4 :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<L>,
+ Cpr=1,
+ Unf=Unf{Src=StableSystem, TopLvl=True,
+ Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (n_aBq :: Int) ->
+ hi
+ GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int n_aBq)
+ (GHC.Maybe.Just @Int n_aBq)}]
+pat4
+ = \ (n_aBq :: Int) ->
+ case $whi_sFB
+ GHC.Types.SPEC
+ (GHC.Maybe.Nothing @Int)
+ (GHC.Maybe.Just @Int n_aBq)
+ (GHC.Maybe.Just @Int n_aBq)
+ of ww_sFS
+ { __DEFAULT ->
+ GHC.Types.I# ww_sFS
+ }
+
+
+
=====================================
testsuite/tests/simplCore/should_compile/T23922a.hs
=====================================
@@ -0,0 +1,19 @@
+{-# OPTIONS_GHC -O -fworker-wrapper-cbv -dcore-lint -Wno-simplifiable-class-constraints #-}
+{-# LANGUAGE ScopedTypeVariables #-}
+
+-- It is very tricky to tickle this bug in 9.6/9.8!
+-- (It came up in a complicated program due to Mikolaj.)
+--
+-- We need a join point, with only dictionary arguments
+-- whose RHS is just another join-point application, which
+-- can be eta-reduced.
+--
+-- The -fworker-wrapper-cbv makes a wrapper whose RHS looks eta-reducible.
+
+module T23922a where
+
+f :: forall a. Eq a => [a] -> Bool
+f x = let {-# NOINLINE j #-}
+ j :: Eq [a] => Bool
+ j = x==x
+ in j
=====================================
testsuite/tests/simplCore/should_compile/all.T
=====================================
@@ -254,6 +254,7 @@ test('T13658', normal, compile, ['-dcore-lint'])
test('T14779a', normal, compile, ['-dcore-lint'])
test('T14779b', normal, compile, ['-dcore-lint'])
test('T13708', normal, compile, [''])
+test('T14003', [only_ways(['optasm']), grep_errmsg('SC:')], compile, ['-ddump-spec-constr'])
# thunk should inline here, so check whether or not it appears in the Core
# (we skip profasm because it might not inline there)
@@ -498,3 +499,4 @@ test('T23567', [extra_files(['T23567A.hs'])], multimod_compile, ['T23567', '-O -
test('T22404', [only_ways(['optasm']), check_errmsg(r'let') ], compile, ['-ddump-simpl -dsuppress-uniques'])
test('T23864', normal, compile, ['-O -dcore-lint -package ghc -Wno-gadt-mono-local-binds'])
test('T23938', [extra_files(['T23938A.hs'])], multimod_compile, ['T23938', '-O -v0'])
+test('T23922a', normal, compile, ['-O'])
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/9ad5ead064fbe99e60e65e07170785e1e4ee5e14...b56e0cd300f91899148dcd5654ef0a27676a654a