[Git][ghc/ghc][wip/marge_bot_batch_merge_job] 5 commits: DmdAnal: Annotate top-level function bindings with demands (#18894)
Marge Bot
gitlab at gitlab.haskell.org
Sun Dec 13 06:24:19 UTC 2020
Marge Bot pushed to branch wip/marge_bot_batch_merge_job at Glasgow Haskell Compiler / GHC
Commits:
5bd71bfd by Sebastian Graf at 2020-12-12T04:45:09-05:00
DmdAnal: Annotate top-level function bindings with demands (#18894)
It's useful to annotate a non-exported top-level function like `g` in
```hs
module Lib (h) where
g :: Int -> Int -> (Int,Int)
g m 1 = (m, 0)
g m n = (2 * m, 2 `div` n)
{-# NOINLINE g #-}
h :: Int -> Int
h 1 = 0
h m
| odd m = snd (g m 2)
| otherwise = uncurry (+) (g 2 m)
```
with its demand `UCU(CS(P(1P(U),SP(U))))`, which tells us that whenever `g` was
called, the second component of the returned pair was evaluated strictly.
Since #18903 we do so for local functions, where we can see all calls.
For top-level functions, we can assume that all *exported* functions are
demanded according to `topDmd` and thus get sound demands for
non-exported top-level functions.
The demand on `g` is crucial information for Nested CPR, which may then
go on and unbox `g` for the second pair component. That is true even if
that pair component may diverge, as is the case for the call site `g 13
0`, which throws a div-by-zero exception.
In `T18894b`, you can even see the new demand annotation enabling us to
eta-expand a function that we wouldn't be able to eta-expand without
Call Arity.
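For illustration, here is a minimal, compilable sketch of the T18894b situation,
following the example given in Note [Why care for top-level demand annotations?]
below; the `expensive` definition is a hypothetical stand-in, not the actual test case:
```hs
-- Only f is exported, so demand analysis sees every call of eta and can
-- record that eta is always called with two arguments, which justifies
-- eta-expanding it to arity 2.
module T18894b (f) where

expensive :: Int -> (Int, Int)
expensive x = (x * x, x + 1)   -- hypothetical stand-in for the Note's "expensive"
{-# NOINLINE expensive #-}

eta :: Int -> Int -> Int
eta x = if fst (expensive x) == 13 then \y -> y + 1 else \y -> y - 1

f :: Int -> Int
f m = eta m 2 + eta 2 m
```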
We only track bindings of function type in order not to risk huge compile-time
regressions, see `isInterestingTopLevelFn`.
There was a CoreLint check that rejected strict demand annotations on
recursive or top-level bindings, which seems completely unjustified.
All the cases I investigated were fine, so I removed it.
Fixes #18894.
- - - - -
3aae036e by Sebastian Graf at 2020-12-12T04:45:09-05:00
Demand: Simplify `CU(U)` to `U` (#19005)
Both sub-demands encode the same information.
This is a trivial change and already affects a few regression tests
(e.g. `T5075`), so no separate regression test is necessary.
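For reference, a small self-contained sketch of the equivalence, mirroring the
`mkCall` smart constructor added to GHC.Types.Demand in the patch below; the data
declarations here are simplified stand-ins for the real types:
```hs
-- Simplified stand-ins for Card and SubDemand, just enough to show the
-- rewrite Call n (Poly n) ==> Poly n, i.e. CU(U) ==> U in demand notation.
data Card = C_00 | C_01 | C_0N | C_11 | C_1N | C_10
  deriving Eq

data SubDemand
  = Poly Card             -- polymorphic demand, e.g. U is Poly C_0N
  | Call Card SubDemand   -- call demand, e.g. CU(U) is Call C_0N (Poly C_0N)

-- Smart constructor: when the call cardinality matches the polymorphic
-- sub-demand it wraps, both encode the same information, so keep Poly.
mkCall :: Card -> SubDemand -> SubDemand
mkCall n sd@(Poly m) | n == m = sd
mkCall n sd          = Call n sd
-- e.g. mkCall C_0N (Poly C_0N) ==> Poly C_0N, i.e. CU(U) simplifies to U.
```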
- - - - -
c6477639 by Adam Sandberg Ericsson at 2020-12-12T04:45:48-05:00
hadrian: correctly copy the docs dir into the bindist #18669
- - - - -
e033dd05 by Adam Sandberg Ericsson at 2020-12-12T10:52:19+00:00
mkDocs: support hadrian bindists #18973
- - - - -
9cc868bf by John Ericson at 2020-12-13T01:24:07-05:00
Remove old .travis.yml
- - - - -
21 changed files:
- − .travis.yml
- compiler/GHC/Core/FVs.hs
- compiler/GHC/Core/Lint.hs
- compiler/GHC/Core/Opt/DmdAnal.hs
- compiler/GHC/Core/Opt/Pipeline.hs
- compiler/GHC/Core/Opt/Simplify/Env.hs
- compiler/GHC/Core/Opt/WorkWrap.hs
- compiler/GHC/Core/Opt/WorkWrap/Utils.hs
- compiler/GHC/Core/Tidy.hs
- compiler/GHC/Types/Demand.hs
- compiler/GHC/Types/Id/Info.hs
- distrib/mkDocs/mkDocs
- hadrian/src/Rules/BinaryDist.hs
- testsuite/tests/arityanal/should_compile/Arity11.stderr
- testsuite/tests/arityanal/should_compile/Arity16.stderr
- + testsuite/tests/stranal/should_compile/T18894.hs
- + testsuite/tests/stranal/should_compile/T18894.stderr
- + testsuite/tests/stranal/should_compile/T18894b.hs
- + testsuite/tests/stranal/should_compile/T18894b.stderr
- testsuite/tests/stranal/should_compile/all.T
- testsuite/tests/stranal/sigs/T5075.stderr
Changes:
=====================================
.travis.yml deleted
=====================================
@@ -1,61 +0,0 @@
-# The following enables container-based travis instances
-sudo: false
-
-git:
- submodules: false
-
-env:
- - DEBUG_STAGE2=YES
- - DEBUG_STAGE2=NO
-
-# TODO. Install llvm once llvm's APT repository is working again.
-# See http://lists.llvm.org/pipermail/llvm-dev/2016-May/100303.html.
-addons:
- apt:
- sources:
- - hvr-ghc
- #- llvm-toolchain-precise-3.7
- - ubuntu-toolchain-r-test
- packages:
- - cabal-install-2.2
- - ghc-8.4.3
- - alex-3.1.7
- - happy-1.19.5
- - python3
- #- llvm-3.7
-
-before_install:
- - export PATH=/opt/ghc/8.4.3/bin:/opt/cabal/2.2/bin:/opt/alex/3.1.7/bin:/opt/happy/1.19.5/bin:/usr/lib/llvm-3.7/bin:$PATH
-
-# Be explicit about which protocol to use, such that we don't have to repeat the rewrite command for each.
- - git config remote.origin.url git://github.com/${TRAVIS_REPO_SLUG}.git
- - git config --global url."git://github.com/${TRAVIS_REPO_SLUG%/*}/packages-".insteadOf "git://github.com/${TRAVIS_REPO_SLUG%/*}/packages/"
- - git submodule --quiet init # Be quiet about these urls, as we may override them later.
-
-# Check if submodule repositories exist.
- - git config --get-regexp submodule.*.url | while read entry url; do git ls-remote "$url" dummyref 2>/dev/null && echo "$entry = $url" || git config --unset-all "$entry" ; done
-
-# Use github.com/ghc for those submodule repositories we couldn't connect to.
- - git config remote.origin.url git://github.com/ghc/ghc.git
- - git config --global url."git://github.com/ghc/packages-".insteadOf git://github.com/ghc/packages/
- - git submodule init # Don't be quiet, we want to show these urls.
- - git submodule --quiet update --recursive # Now we can be quiet again.
-
-script:
- # do not build docs
- - echo 'HADDOCK_DOCS = NO' >> mk/validate.mk
- - echo 'BUILD_SPHINX_HTML = NO' >> mk/validate.mk
- - echo 'BUILD_SPHINX_PDF = NO' >> mk/validate.mk
- # do not build dynamic libraries
- - echo 'DYNAMIC_GHC_PROGRAMS = NO' >> mk/validate.mk
- - echo 'GhcLibWays = v' >> mk/validate.mk
- - if [ "$DEBUG_STAGE2" = "YES" ]; then echo 'GhcStage2HcOpts += -DDEBUG' >> mk/validate.mk; fi
- # * Use --quiet, otherwise the build log might exceed the limit of 4
- # megabytes, causing Travis to kill our job.
- # * But use VERBOSE=2 (the default, but not when using --quiet) otherwise
- # the testsuite might not print output for over 10 minutes (more likely so
- # when DEBUG_STAGE2=NO), causing Travis to again kill our job.
- # * Use --fast, to stay within the time limits set by Travis.
- # See Note [validate and testsuite speed] in toplevel Makefile.
- # Actually, do not run test suite. Takes too long.
- - THREADS=3 SKIP_PERF_TESTS=YES VERBOSE=2 ./validate --fast --quiet --build-only
=====================================
compiler/GHC/Core/FVs.hs
=====================================
@@ -525,12 +525,11 @@ ruleLhsFVIds (Rule { ru_bndrs = bndrs, ru_args = args })
= filterFV isLocalId $ addBndrs bndrs (exprs_fvs args)
ruleRhsFreeIds :: CoreRule -> VarSet
--- ^ This finds all locally-defined free Ids on the left hand side of a rule
+-- ^ This finds all locally-defined free Ids on the right hand side of a rule
-- and returns them as a non-deterministic set
ruleRhsFreeIds (BuiltinRule {}) = emptyVarSet
-ruleRhsFreeIds (Rule { ru_bndrs = bndrs, ru_args = args })
- = fvVarSet $ filterFV isLocalId $
- addBndrs bndrs $ exprs_fvs args
+ruleRhsFreeIds (Rule { ru_bndrs = bndrs, ru_rhs = rhs })
+ = fvVarSet $ filterFV isLocalId $ addBndrs bndrs $ expr_fvs rhs
{-
Note [Rule free var hack] (Not a hack any more)
=====================================
compiler/GHC/Core/Lint.hs
=====================================
@@ -624,14 +624,6 @@ lintLetBind top_lvl rec_flag binder rhs rhs_ty
|| exprIsTickedString rhs)
(badBndrTyMsg binder (text "unlifted"))
- -- Check that if the binder is top-level or recursive, it's not
- -- demanded. Primitive string literals are exempt as there is no
- -- computation to perform, see Note [Core top-level string literals].
- ; checkL (not (isStrictId binder)
- || (isNonRec rec_flag && not (isTopLevel top_lvl))
- || exprIsTickedString rhs)
- (mkStrictMsg binder)
-
-- Check that if the binder is at the top level and has type Addr#,
-- that it is a string literal, see
-- Note [Core top-level string literals].
@@ -3120,13 +3112,6 @@ badBndrTyMsg binder what
= vcat [ text "The type of this binder is" <+> what <> colon <+> ppr binder
, text "Binder's type:" <+> ppr (idType binder) ]
-mkStrictMsg :: Id -> MsgDoc
-mkStrictMsg binder
- = vcat [hsep [text "Recursive or top-level binder has strict demand info:",
- ppr binder],
- hsep [text "Binder's demand info:", ppr (idDemandInfo binder)]
- ]
-
mkNonTopExportedMsg :: Id -> MsgDoc
mkNonTopExportedMsg binder
= hsep [text "Non-top-level binder is marked as exported:", ppr binder]
=====================================
compiler/GHC/Core/Opt/DmdAnal.hs
=====================================
@@ -37,6 +37,7 @@ import GHC.Core.Type
import GHC.Core.FVs ( exprFreeIds, ruleRhsFreeIds )
import GHC.Core.Coercion ( Coercion, coVarsOfCo )
import GHC.Core.FamInstEnv
+import GHC.Core.Opt.Arity ( typeArity )
import GHC.Utils.Misc
import GHC.Utils.Panic
import GHC.Data.Maybe ( isJust )
@@ -64,28 +65,55 @@ data DmdAnalOpts = DmdAnalOpts
--
-- Note: use `seqBinds` on the result to avoid leaks due to lazyness (cf Note
-- [Stamp out space leaks in demand analysis])
-dmdAnalProgram :: DmdAnalOpts -> FamInstEnvs -> CoreProgram -> CoreProgram
-dmdAnalProgram opts fam_envs binds = binds_plus_dmds
- where
- env = emptyAnalEnv opts fam_envs
- binds_plus_dmds = snd $ mapAccumL dmdAnalTopBind env binds
-
--- Analyse a (group of) top-level binding(s)
-dmdAnalTopBind :: AnalEnv
- -> CoreBind
- -> (AnalEnv, CoreBind)
-dmdAnalTopBind env (NonRec id rhs)
- = ( extendAnalEnv TopLevel env id sig
- , NonRec (setIdStrictness id sig) rhs')
+dmdAnalProgram :: DmdAnalOpts -> FamInstEnvs -> [CoreRule] -> CoreProgram -> CoreProgram
+dmdAnalProgram opts fam_envs rules binds
+ = snd $ go (emptyAnalEnv opts fam_envs) binds
where
- ( _, sig, rhs') = dmdAnalRhsLetDown Nothing env topSubDmd id rhs
+ -- See Note [Analysing top-level bindings]
+ -- and Note [Why care for top-level demand annotations?]
+ go _ [] = (nopDmdType, [])
+ go env (b:bs) = cons_up $ dmdAnalBind TopLevel env topSubDmd b anal_body
+ where
+ anal_body env'
+ | (body_ty, bs') <- go env' bs
+ = (add_exported_uses env' body_ty (bindersOf b), bs')
+
+ cons_up :: (a, b, [b]) -> (a, [b])
+ cons_up (dmd_ty, b', bs') = (dmd_ty, b':bs')
+
+ add_exported_uses :: AnalEnv -> DmdType -> [Id] -> DmdType
+ add_exported_uses env = foldl' (add_exported_use env)
+
+ -- | If @e@ is denoted by @dmd_ty@, then @add_exported_use _ dmd_ty id@
+ -- corresponds to the demand type of @(id, e)@, but is a lot more direct.
+ -- See Note [Analysing top-level bindings].
+ add_exported_use :: AnalEnv -> DmdType -> Id -> DmdType
+ add_exported_use env dmd_ty id
+ | isExportedId id || elemVarSet id rule_fvs
+ -- See Note [Absence analysis for stable unfoldings and RULES]
+ = dmd_ty `plusDmdType` fst (dmdAnalStar env topDmd (Var id))
+ | otherwise
+ = dmd_ty
-dmdAnalTopBind env (Rec pairs)
- = (env', Rec pairs')
- where
- (env', _, pairs') = dmdFix TopLevel env topSubDmd pairs
- -- We get two iterations automatically
- -- c.f. the NonRec case above
+ rule_fvs :: IdSet
+ rule_fvs = foldr (unionVarSet . ruleRhsFreeIds) emptyVarSet rules
+
+-- | We attach useful (e.g. not 'topDmd') 'idDemandInfo' to top-level bindings
+-- that satisfy this function.
+--
+-- Basically, we want to know how top-level *functions* are *used*
+-- (e.g. called). The information will always be lazy.
+-- Any other top-level bindings are boring.
+--
+-- See also Note [Why care for top-level demand annotations?].
+isInterestingTopLevelFn :: Id -> Bool
+-- SG tried to set this to True and got a +2% ghc/alloc regression in T5642
+-- (which is dominated by the Simplifier) at no gain in analysis precision.
+-- If there was a gain, that regression might be acceptable.
+-- Plus, we could use LetUp for thunks and share some code with local let
+-- bindings.
+isInterestingTopLevelFn id =
+ typeArity (idType id) `lengthExceeds` 0
{- Note [Stamp out space leaks in demand analysis]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -105,8 +133,80 @@ generation would hold on to an extra copy of the Core program, via
unforced thunks in demand or strictness information; and it is the
most memory-intensive part of the compilation process, so this added
seqBinds makes a big difference in peak memory usage.
--}
+Note [Analysing top-level bindings]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Consider a CoreProgram like
+ e1 = ...
+ n1 = ...
+ e2 = \a b -> ... fst (n1 a b) ...
+ n2 = \c d -> ... snd (e2 c d) ...
+ ...
+where e* are exported, but n* are not.
+Intuitively, we can see that @n1@ is only ever called with two arguments
+and in every call site, the first component of the result of the call
+is evaluated. Thus, we'd like it to have idDemandInfo @UCU(C1(P(SU,A)))@.
+NB: We may *not* give e2 a similar annotation, because it is exported and
+external callers might use it in arbitrary ways, expressed by 'topDmd'.
+This can then be exploited by Nested CPR and eta-expansion,
+see Note [Why care for top-level demand annotations?].
+
+How do we get this result? Answer: By analysing the program as if it was a let
+expression of this form:
+ let e1 = ... in
+ let n1 = ... in
+ let e2 = ... in
+ let n2 = ... in
+ (e1,e2, ...)
+E.g. putting all bindings in nested lets and returning all exported binders in a tuple.
+Of course, we will not actually build that CoreExpr! Instead we faithfully
+simulate analysis of said expression by adding the free variable 'DmdEnv'
+of @e*@'s strictness signatures to the 'DmdType' we get from analysing the
+nested bindings.
+
+And even then the above form blows up analysis performance in T10370:
+If @e1@ uses many free variables, we'll unnecessarily carry their demands around
+with us from the moment we analyse the pair to the moment we bubble back up to
+the binding for @e1@. So instead we analyse as if we had
+ let e1 = ... in
+ (e1, let n1 = ... in
+ ( let e2 = ... in
+ (e2, let n2 = ... in
+ ( ...))))
+That is, a series of right-nested pairs, where the @fst@ are the exported
+binders of the last enclosing let binding and @snd@ continues the nested
+lets.
+
+Variables occurring free in RULE RHSs are to be handled the same as exported Ids.
+See also Note [Absence analysis for stable unfoldings and RULES].
+
+Note [Why care for top-level demand annotations?]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Reading Note [Analysing top-level bindings], you might think that we go through
+quite some trouble to get useful demands for top-level bindings. They can never
+be strict, for example, so why bother?
+
+First, we get to eta-expand top-level bindings that we weren't able to
+eta-expand before without Call Arity. From T18894b:
+ module T18894b (f) where
+ eta :: Int -> Int -> Int
+ eta x = if fst (expensive x) == 13 then \y -> ... else \y -> ...
+ f m = ... eta m 2 ... eta 2 m ...
+Since only @f@ is exported, we see all call sites of @eta@ and can eta-expand to
+arity 2.
+
+The call demands we get for some top-level bindings will also allow Nested CPR
+to unbox deeper. From T18894:
+ module T18894 (h) where
+ g m n = (2 * m, 2 `div` n)
+ {-# NOINLINE g #-}
+ h :: Int -> Int
+ h m = ... snd (g m 2) ... uncurry (+) (g 2 m) ...
+Only @h@ is exported, hence we see that @g@ is always called in contexts where we
+also force the division in the second component of the pair returned by @g@.
+This allows Nested CPR to evaluate the division eagerly and return an I# in its
+position.
+-}
{-
************************************************************************
@@ -114,7 +214,103 @@ seqBinds makes a big difference in peak memory usage.
\subsection{The analyser itself}
* *
************************************************************************
+-}
+
+-- | Analyse a binding group and its \"body\", e.g. where it is in scope.
+--
+-- It calls a function that knows how to analyse this \"body\" given
+-- an 'AnalEnv' with updated demand signatures for the binding group
+-- (reflecting their 'idStrictnessInfo') and expects to receive a
+-- 'DmdType' in return, which it uses to annotate the binding group with their
+-- 'idDemandInfo'.
+dmdAnalBind
+ :: TopLevelFlag
+ -> AnalEnv
+ -> SubDemand -- ^ Demand put on the "body"
+ -- (important for join points)
+ -> CoreBind
+ -> (AnalEnv -> (DmdType, a)) -- ^ How to analyse the "body", e.g.
+ -- where the binding is in scope
+ -> (DmdType, CoreBind, a)
+dmdAnalBind top_lvl env dmd bind anal_body = case bind of
+ NonRec id rhs
+ | useLetUp top_lvl id
+ -> dmdAnalBindLetUp top_lvl env id rhs anal_body
+ _ -> dmdAnalBindLetDown top_lvl env dmd bind anal_body
+
+-- | Annotates uninteresting top level functions ('isInterestingTopLevelFn')
+-- with 'topDmd', the rest with the given demand.
+setBindIdDemandInfo :: TopLevelFlag -> Id -> Demand -> Id
+setBindIdDemandInfo top_lvl id dmd = setIdDemandInfo id $ case top_lvl of
+ TopLevel | not (isInterestingTopLevelFn id) -> topDmd
+ _ -> dmd
+
+-- | Let bindings can be processed in two ways:
+-- Down (RHS before body) or Up (body before RHS).
+-- This function handles the up variant.
+--
+-- It is very simple. For let x = rhs in body
+-- * Demand-analyse 'body' in the current environment
+-- * Find the demand, 'rhs_dmd' placed on 'x' by 'body'
+-- * Demand-analyse 'rhs' in 'rhs_dmd'
+--
+-- This is used for a non-recursive local let without manifest lambdas (see
+-- 'useLetUp').
+--
+-- This is the LetUp rule in the paper “Higher-Order Cardinality Analysis”.
+dmdAnalBindLetUp :: TopLevelFlag -> AnalEnv -> Id -> CoreExpr -> (AnalEnv -> (DmdType, a)) -> (DmdType, CoreBind, a)
+dmdAnalBindLetUp top_lvl env id rhs anal_body = (final_ty, NonRec id' rhs', body')
+ where
+ (body_ty, body') = anal_body env
+ (body_ty', id_dmd) = findBndrDmd env notArgOfDfun body_ty id
+ id' = setBindIdDemandInfo top_lvl id id_dmd
+ (rhs_ty, rhs') = dmdAnalStar env (dmdTransformThunkDmd rhs id_dmd) rhs
+ final_ty = body_ty' `plusDmdType` rhs_ty
+
+-- | Let bindings can be processed in two ways:
+-- Down (RHS before body) or Up (body before RHS).
+-- This function handles the down variant.
+--
+-- It computes a demand signature (by means of 'dmdAnalRhsSig') and uses
+-- that at call sites in the body.
+--
+-- It is used for toplevel definitions, recursive definitions and local
+-- non-recursive definitions that have manifest lambdas (cf. 'useLetUp').
+-- Local non-recursive definitions without a lambda are handled with LetUp.
+--
+-- This is the LetDown rule in the paper “Higher-Order Cardinality Analysis”.
+dmdAnalBindLetDown :: TopLevelFlag -> AnalEnv -> SubDemand -> CoreBind -> (AnalEnv -> (DmdType, a)) -> (DmdType, CoreBind, a)
+dmdAnalBindLetDown top_lvl env dmd bind anal_body = case bind of
+ NonRec id rhs
+ | (env', lazy_fv, id1, rhs1) <-
+ dmdAnalRhsSig top_lvl NonRecursive env dmd id rhs
+ -> do_rest env' lazy_fv [(id1, rhs1)] (uncurry NonRec . only)
+ Rec pairs
+ | (env', lazy_fv, pairs') <- dmdFix top_lvl env dmd pairs
+ -> do_rest env' lazy_fv pairs' Rec
+ where
+ do_rest env' lazy_fv pairs1 build_bind = (final_ty, build_bind pairs2, body')
+ where
+ (body_ty, body') = anal_body env'
+ -- see Note [Lazy and unleashable free variables]
+ dmd_ty = addLazyFVs body_ty lazy_fv
+ (!final_ty, id_dmds) = findBndrsDmds env' dmd_ty (map fst pairs1)
+ pairs2 = zipWith do_one pairs1 id_dmds
+ do_one (id', rhs') dmd = (setBindIdDemandInfo top_lvl id' dmd, rhs')
+ -- If the actual demand is better than the vanilla call
+ -- demand, you might think that we might do better to re-analyse
+ -- the RHS with the stronger demand.
+ -- But (a) That seldom happens, because it means that *every* path in
+ -- the body of the let has to use that stronger demand
+ -- (b) It often happens temporarily in when fixpointing, because
+ -- the recursive function at first seems to place a massive demand.
+ -- But we don't want to go to extra work when the function will
+ -- probably iterate to something less demanding.
+ -- In practice, all the times the actual demand on id2 is more than
+ -- the vanilla call demand seem to be due to (b). So we don't
+ -- bother to re-analyse the RHS.
+{-
Note [Ensure demand is strict]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It's important not to analyse e with a lazy demand because
@@ -194,7 +390,7 @@ dmdAnal' env dmd (App fun arg)
-- Crucially, coercions /are/ handled here, because they are
-- value arguments (#10288)
let
- call_dmd = mkCallDmd dmd
+ call_dmd = mkCalledOnceDmd dmd
(fun_ty, fun') = dmdAnal env call_dmd fun
(arg_dmd, res_ty) = splitDmdTy fun_ty
(arg_ty, arg') = dmdAnalStar env (dmdTransformThunkDmd arg arg_dmd) arg
@@ -295,60 +491,11 @@ dmdAnal' env dmd (Case scrut case_bndr ty alts)
-- , text "res_ty" <+> ppr res_ty ]) $
(res_ty, Case scrut' case_bndr' ty alts')
--- Let bindings can be processed in two ways:
--- Down (RHS before body) or Up (body before RHS).
--- The following case handle the up variant.
---
--- It is very simple. For let x = rhs in body
--- * Demand-analyse 'body' in the current environment
--- * Find the demand, 'rhs_dmd' placed on 'x' by 'body'
--- * Demand-analyse 'rhs' in 'rhs_dmd'
---
--- This is used for a non-recursive local let without manifest lambdas.
--- This is the LetUp rule in the paper “Higher-Order Cardinality Analysis”.
-dmdAnal' env dmd (Let (NonRec id rhs) body)
- | useLetUp id
- = (final_ty, Let (NonRec id' rhs') body')
- where
- (body_ty, body') = dmdAnal env dmd body
- (body_ty', id_dmd) = findBndrDmd env notArgOfDfun body_ty id
- id' = setIdDemandInfo id id_dmd
-
- (rhs_ty, rhs') = dmdAnalStar env (dmdTransformThunkDmd rhs id_dmd) rhs
- final_ty = body_ty' `plusDmdType` rhs_ty
-
-dmdAnal' env dmd (Let (NonRec id rhs) body)
- = (body_ty2, Let (NonRec id2 rhs') body')
+dmdAnal' env dmd (Let bind body)
+ = (final_ty, Let bind' body')
where
- (lazy_fv, sig, rhs') = dmdAnalRhsLetDown Nothing env dmd id rhs
- id1 = setIdStrictness id sig
- env1 = extendAnalEnv NotTopLevel env id sig
- (body_ty, body') = dmdAnal env1 dmd body
- (body_ty1, id2) = annotateBndr env body_ty id1
- body_ty2 = addLazyFVs body_ty1 lazy_fv -- see Note [Lazy and unleashable free variables]
-
- -- If the actual demand is better than the vanilla call
- -- demand, you might think that we might do better to re-analyse
- -- the RHS with the stronger demand.
- -- But (a) That seldom happens, because it means that *every* path in
- -- the body of the let has to use that stronger demand
- -- (b) It often happens temporarily in when fixpointing, because
- -- the recursive function at first seems to place a massive demand.
- -- But we don't want to go to extra work when the function will
- -- probably iterate to something less demanding.
- -- In practice, all the times the actual demand on id2 is more than
- -- the vanilla call demand seem to be due to (b). So we don't
- -- bother to re-analyse the RHS.
-
-dmdAnal' env dmd (Let (Rec pairs) body)
- = let
- (env', lazy_fv, pairs') = dmdFix NotTopLevel env dmd pairs
- (body_ty, body') = dmdAnal env' dmd body
- body_ty1 = deleteFVs body_ty (map fst pairs)
- body_ty2 = addLazyFVs body_ty1 lazy_fv -- see Note [Lazy and unleashable free variables]
- in
- body_ty2 `seq`
- (body_ty2, Let (Rec pairs') body')
+ (final_ty, bind', body') = dmdAnalBind NotTopLevel env dmd bind go'
+ go' env' = dmdAnal env' dmd body
-- | A simple, syntactic analysis of whether an expression MAY throw a precise
-- exception when evaluated. It's always sound to return 'True'.
@@ -582,9 +729,17 @@ dmdTransform env var dmd
| Just (sig, top_lvl) <- lookupSigEnv env var
, let fn_ty = dmdTransformSig sig dmd
= -- pprTrace "dmdTransform:LetDown" (vcat [ppr var, ppr sig, ppr dmd, ppr fn_ty]) $
- if isTopLevel top_lvl
- then fn_ty -- Don't record demand on top-level things
- else addVarDmd fn_ty var (C_11 :* dmd)
+ case top_lvl of
+ NotTopLevel -> addVarDmd fn_ty var (C_11 :* dmd)
+ TopLevel
+ | isInterestingTopLevelFn var
+ -- Top-level things will be used multiple times or not at
+ -- all anyway, hence the multDmd below: It means we don't
+ -- have to track whether @var@ is used strictly or at most
+ -- once, because ultimately it never will.
+ -> addVarDmd fn_ty var (C_0N `multDmd` (C_11 :* dmd)) -- discard strictness
+ | otherwise
+ -> fn_ty -- don't bother tracking; just annotate with 'topDmd' later
-- Everything else:
-- * Local let binders for which we use LetUp (cf. 'useLetUp')
-- * Lambda binders
@@ -599,46 +754,46 @@ dmdTransform env var dmd
* *
********************************************************************* -}
--- Let bindings can be processed in two ways:
--- Down (RHS before body) or Up (body before RHS).
--- dmdAnalRhsLetDown implements the Down variant:
--- * assuming a demand of <L,U>
+-- | @dmdAnalRhsSig@ analyses the given RHS to compute a demand signature
+-- for the LetDown rule. It works as follows:
+--
+-- * assuming a demand of <U>
-- * looking at the definition
-- * determining a strictness signature
--
--- It is used for toplevel definition, recursive definitions and local
--- non-recursive definitions that have manifest lambdas.
--- Local non-recursive definitions without a lambda are handled with LetUp.
---
--- This is the LetDown rule in the paper “Higher-Order Cardinality Analysis”.
-dmdAnalRhsLetDown
- :: Maybe [Id] -- Just bs <=> recursive, Nothing <=> non-recursive
+-- Since it assumed a demand of <U>, the resulting signature is applicable at
+-- any call site.
+dmdAnalRhsSig
+ :: TopLevelFlag
+ -> RecFlag
-> AnalEnv -> SubDemand
-> Id -> CoreExpr
- -> (DmdEnv, StrictSig, CoreExpr)
+ -> (AnalEnv, DmdEnv, Id, CoreExpr)
-- Process the RHS of the binding, add the strictness signature
-- to the Id, and augment the environment with the signature as well.
-- See Note [NOINLINE and strictness]
-dmdAnalRhsLetDown rec_flag env let_dmd id rhs
- = -- pprTrace "dmdAnalRhsLetDown" (ppr id $$ ppr let_dmd $$ ppr sig $$ ppr lazy_fv) $
- (lazy_fv, sig, rhs')
+dmdAnalRhsSig top_lvl rec_flag env let_dmd id rhs
+ = -- pprTrace "dmdAnalRhsSig" (ppr id $$ ppr let_dmd $$ ppr sig $$ ppr lazy_fv) $
+ (env', lazy_fv, id', rhs')
where
rhs_arity = idArity id
+ -- See Note [Demand signatures are computed for a threshold demand based on idArity]
rhs_dmd -- See Note [Demand analysis for join points]
-- See Note [Invariants on join points] invariant 2b, in GHC.Core
-- rhs_arity matches the join arity of the join point
| isJoinId id
- = mkCallDmds rhs_arity let_dmd
+ = mkCalledOnceDmds rhs_arity let_dmd
| otherwise
- -- NB: rhs_arity
- -- See Note [Demand signatures are computed for a threshold demand based on idArity]
- = mkRhsDmd env rhs_arity rhs
+ = mkCalledOnceDmds rhs_arity topSubDmd
(rhs_dmd_ty, rhs') = dmdAnal env rhs_dmd rhs
DmdType rhs_fv rhs_dmds rhs_div = rhs_dmd_ty
sig = mkStrictSigForArity rhs_arity (DmdType sig_fv rhs_dmds rhs_div)
+ id' = id `setIdStrictness` sig
+ env' = extendAnalEnv top_lvl env id' sig
+
-- See Note [Aggregated demand for cardinality]
-- FIXME: That Note doesn't explain the following lines at all. The reason
-- is really much different: When we have a recursive function, we'd
@@ -651,8 +806,8 @@ dmdAnalRhsLetDown rec_flag env let_dmd id rhs
-- we'd have to do an additional iteration. reuseEnv makes sure that
-- we never get used-once info for FVs of recursive functions.
rhs_fv1 = case rec_flag of
- Just bs -> reuseEnv (delVarEnvList rhs_fv bs)
- Nothing -> rhs_fv
+ Recursive -> reuseEnv rhs_fv
+ NonRecursive -> rhs_fv
rhs_fv2 = rhs_fv1 `keepAliveDmdEnv` extra_fvs
-- Find the RHS free vars of the unfoldings and RULES
@@ -669,13 +824,6 @@ dmdAnalRhsLetDown rec_flag env let_dmd id rhs
= exprFreeIds unf_body
| otherwise = emptyVarSet
--- | @mkRhsDmd env rhs_arity rhs@ creates a 'SubDemand' for
--- unleashing on the given function's @rhs@, by creating
--- a call demand of @rhs_arity@
--- See Historical Note [Product demands for function body]
-mkRhsDmd :: AnalEnv -> Arity -> CoreExpr -> SubDemand
-mkRhsDmd _env rhs_arity _rhs = mkCallDmds rhs_arity topSubDmd
-
-- | If given the (local, non-recursive) let-bound 'Id', 'useLetUp' determines
-- whether we should process the binding up (body before rhs) or down (rhs
-- before body).
@@ -720,8 +868,8 @@ mkRhsDmd _env rhs_arity _rhs = mkCallDmds rhs_arity topSubDmd
-- * For a more convincing example with join points, see Note [Demand analysis
-- for join points].
--
-useLetUp :: Var -> Bool
-useLetUp f = idArity f == 0 && not (isJoinId f)
+useLetUp :: TopLevelFlag -> Var -> Bool
+useLetUp top_lvl f = isNotTopLevel top_lvl && idArity f == 0 && not (isJoinId f)
{- Note [Demand analysis for join points]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -939,8 +1087,6 @@ dmdFix :: TopLevelFlag
dmdFix top_lvl env let_dmd orig_pairs
= loop 1 initial_pairs
where
- bndrs = map fst orig_pairs
-
-- See Note [Initialising strictness]
initial_pairs | ae_virgin env = [(setIdStrictness id botSig, rhs) | (id, rhs) <- orig_pairs ]
| otherwise = orig_pairs
@@ -990,10 +1136,8 @@ dmdFix top_lvl env let_dmd orig_pairs
= -- pprTrace "my_downRhs" (ppr id $$ ppr (idStrictness id) $$ ppr sig) $
((env', lazy_fv'), (id', rhs'))
where
- (lazy_fv1, sig, rhs') = dmdAnalRhsLetDown (Just bndrs) env let_dmd id rhs
- lazy_fv' = plusVarEnv_C plusDmd lazy_fv lazy_fv1
- env' = extendAnalEnv top_lvl env id sig
- id' = setIdStrictness id sig
+ (env', lazy_fv1, id', rhs') = dmdAnalRhsSig top_lvl Recursive env let_dmd id rhs
+ lazy_fv' = plusVarEnv_C plusDmd lazy_fv lazy_fv1
zapIdStrictness :: [(Id, CoreExpr)] -> [(Id, CoreExpr)]
zapIdStrictness pairs = [(setIdStrictness id nopSig, rhs) | (id, rhs) <- pairs ]
@@ -1090,7 +1234,7 @@ addLazyFVs dmd_ty lazy_fvs
-- demand with the bottom coming up from 'error'
--
-- I got a loop in the fixpointer without this, due to an interaction
- -- with the lazy_fv filtering in dmdAnalRhsLetDown. Roughly, it was
+ -- with the lazy_fv filtering in dmdAnalRhsSig. Roughly, it was
-- letrec f n x
-- = letrec g y = x `fatbar`
-- letrec h z = z + ...g...
@@ -1156,10 +1300,6 @@ annotateLamIdBndr env arg_of_dfun dmd_ty id
main_ty = addDemand dmd dmd_ty'
(dmd_ty', dmd) = findBndrDmd env arg_of_dfun dmd_ty id
-deleteFVs :: DmdType -> [Var] -> DmdType
-deleteFVs (DmdType fvs dmds res) bndrs
- = DmdType (delVarEnvList fvs bndrs) dmds res
-
{-
Note [NOINLINE and strictness]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=====================================
compiler/GHC/Core/Opt/Pipeline.hs
=====================================
@@ -66,6 +66,7 @@ import GHC.Types.SrcLoc
import GHC.Types.Id
import GHC.Types.Id.Info
import GHC.Types.Basic
+import GHC.Types.Demand ( zapDmdEnvSig )
import GHC.Types.Var.Set
import GHC.Types.Var.Env
import GHC.Types.Unique.Supply ( UniqSupply, mkSplitUniqSupply, splitUniqSupply )
@@ -499,7 +500,7 @@ doCorePass CoreDoExitify = {-# SCC "Exitify" #-}
doPass exitifyProgram
doCorePass CoreDoDemand = {-# SCC "DmdAnal" #-}
- doPassDFM dmdAnal
+ doPassDFRM dmdAnal
doCorePass CoreDoCpr = {-# SCC "CprAnal" #-}
doPassDFM cprAnalProgram
@@ -582,6 +583,13 @@ doPassDFM do_pass guts = do
let fam_envs = (p_fam_env, mg_fam_inst_env guts)
doPassM (liftIO . do_pass dflags fam_envs) guts
+doPassDFRM :: (DynFlags -> FamInstEnvs -> [CoreRule] -> CoreProgram -> IO CoreProgram) -> ModGuts -> CoreM ModGuts
+doPassDFRM do_pass guts = do
+ dflags <- getDynFlags
+ p_fam_env <- getPackageFamInstEnv
+ let fam_envs = (p_fam_env, mg_fam_inst_env guts)
+ doPassM (liftIO . do_pass dflags fam_envs (mg_rules guts)) guts
+
doPassDFU :: (DynFlags -> FamInstEnvs -> UniqSupply -> CoreProgram -> CoreProgram) -> ModGuts -> CoreM ModGuts
doPassDFU do_pass guts = do
dflags <- getDynFlags
@@ -1095,13 +1103,13 @@ transferIdInfo exported_id local_id
-dmdAnal :: DynFlags -> FamInstEnvs -> CoreProgram -> IO CoreProgram
-dmdAnal dflags fam_envs binds = do
+dmdAnal :: DynFlags -> FamInstEnvs -> [CoreRule] -> CoreProgram -> IO CoreProgram
+dmdAnal dflags fam_envs rules binds = do
let opts = DmdAnalOpts
{ dmd_strict_dicts = gopt Opt_DictsStrict dflags
}
- binds_plus_dmds = dmdAnalProgram opts fam_envs binds
+ binds_plus_dmds = dmdAnalProgram opts fam_envs rules binds
Err.dumpIfSet_dyn dflags Opt_D_dump_str_signatures "Strictness signatures" FormatText $
- dumpIdInfoOfProgram (ppr . strictnessInfo) binds_plus_dmds
+ dumpIdInfoOfProgram (ppr . zapDmdEnvSig . strictnessInfo) binds_plus_dmds
-- See Note [Stamp out space leaks in demand analysis] in GHC.Core.Opt.DmdAnal
seqBinds binds_plus_dmds `seq` return binds_plus_dmds
=====================================
compiler/GHC/Core/Opt/Simplify/Env.hs
=====================================
@@ -598,11 +598,10 @@ addJoinFlts = appOL
mkRecFloats :: SimplFloats -> SimplFloats
-- Flattens the floats into a single Rec group,
-- They must either all be lifted LetFloats or all JoinFloats
-mkRecFloats floats@(SimplFloats { sfLetFloats = LetFloats bs ff
+mkRecFloats floats@(SimplFloats { sfLetFloats = LetFloats bs _ff
, sfJoinFloats = jbs
, sfInScope = in_scope })
- = ASSERT2( case ff of { FltLifted -> True; _ -> False }, ppr (fromOL bs) )
- ASSERT2( isNilOL bs || isNilOL jbs, ppr floats )
+ = ASSERT2( isNilOL bs || isNilOL jbs, ppr floats )
SimplFloats { sfLetFloats = floats'
, sfJoinFloats = jfloats'
, sfInScope = in_scope }
=====================================
compiler/GHC/Core/Opt/WorkWrap.hs
=====================================
@@ -484,7 +484,7 @@ tryWW dflags fam_envs is_rec fn_id rhs
| is_fun && is_eta_exp
= splitFun dflags fam_envs new_fn_id fn_info wrap_dmds div cpr rhs
- | is_thunk -- See Note [Thunk splitting]
+ | isNonRec is_rec, is_thunk -- See Note [Thunk splitting]
= splitThunk dflags fam_envs is_rec new_fn_id rhs
| otherwise
@@ -777,6 +777,10 @@ Notice that x certainly has the CPR property now!
In fact, splitThunk uses the function argument w/w splitting
function, so that if x's demand is deeper (say U(U(L,L),L))
then the splitting will go deeper too.
+
+NB: For recursive thunks, the Simplifier is unable to float `x-rhs` out of
+`x*`'s RHS, because `x*` occurs freely in `x-rhs`, and will just change it
+back to the original definition, so we just split non-recursive thunks.
-}
-- See Note [Thunk splitting]
=====================================
compiler/GHC/Core/Opt/WorkWrap/Utils.hs
=====================================
@@ -1275,10 +1275,14 @@ mk_absent_let dflags fam_envs arg
abs_rhs = mkAbsentErrorApp arg_ty msg
msg = showSDoc (gopt_set dflags Opt_SuppressUniques)
- (ppr arg <+> ppr (idType arg) <+> file_msg)
+ (vcat
+ [ text "Arg:" <+> ppr arg
+ , text "Type:" <+> ppr arg_ty
+ , file_msg
+ ])
file_msg = case outputFile dflags of
Nothing -> empty
- Just f -> text "in output file " <+> quotes (text f)
+ Just f -> text "In output file " <+> quotes (text f)
-- We need to suppress uniques here because otherwise they'd
-- end up in the generated code as strings. This is bad for
-- determinism, because with different uniques the strings
=====================================
compiler/GHC/Core/Tidy.hs
=====================================
@@ -21,7 +21,7 @@ import GHC.Core
import GHC.Core.Seq ( seqUnfolding )
import GHC.Types.Id
import GHC.Types.Id.Info
-import GHC.Types.Demand ( zapUsageEnvSig )
+import GHC.Types.Demand ( zapDmdEnvSig )
import GHC.Core.Type ( tidyType, tidyVarBndr )
import GHC.Core.Coercion ( tidyCo )
import GHC.Types.Var
@@ -206,7 +206,7 @@ tidyLetBndr rec_tidy_env env@(tidy_env, var_env) id
new_info = vanillaIdInfo
`setOccInfo` occInfo old_info
`setArityInfo` arityInfo old_info
- `setStrictnessInfo` zapUsageEnvSig (strictnessInfo old_info)
+ `setStrictnessInfo` zapDmdEnvSig (strictnessInfo old_info)
`setDemandInfo` demandInfo old_info
`setInlinePragInfo` inlinePragInfo old_info
`setUnfoldingInfo` new_unf
=====================================
compiler/GHC/Types/Demand.hs
=====================================
@@ -34,7 +34,7 @@ module GHC.Types.Demand (
lazyApply1Dmd, lazyApply2Dmd, strictOnceApply1Dmd, strictManyApply1Dmd,
-- ** Other @Demand@ operations
oneifyCard, oneifyDmd, strictifyDmd, strictifyDictDmd, mkWorkerDemand,
- peelCallDmd, peelManyCalls, mkCallDmd, mkCallDmds,
+ peelCallDmd, peelManyCalls, mkCalledOnceDmd, mkCalledOnceDmds,
addCaseBndrDmd,
-- ** Extracting one-shot information
argOneShots, argsOneShots, saturatedByOneShots,
@@ -73,7 +73,7 @@ module GHC.Types.Demand (
seqDemand, seqDemandList, seqDmdType, seqStrictSig,
-- * Zapping usage information
- zapUsageDemand, zapUsageEnvSig, zapUsedOnceDemand, zapUsedOnceSig
+ zapUsageDemand, zapDmdEnvSig, zapUsedOnceDemand, zapUsedOnceSig
) where
#include "HsVersions.h"
@@ -267,9 +267,11 @@ data SubDemand
-- with the specified cardinality at every level.
-- Expands to 'Call' via 'viewCall' and to 'Prod' via 'viewProd'.
--
- -- @Poly n@ is semantically equivalent to @nP(n,n,...)@ or @Cn(Cn(..Cn(n)))@.
- -- So @U === UP(U,U,...)@ and @U === CU(CU(..CU(U)))@,
- -- @S === SP(S,S,...)@ and @S === CS(CS(..CS(S)))@, and so on.
+ -- @Poly n@ is semantically equivalent to @Prod [n :* Poly n, ...]@ or
+ -- @Call n (Poly n)@. 'mkCall' and 'mkProd' do these rewrites.
+ --
+ -- In Note [Demand notation]: @U === P(U,U,...)@ and @U === CU(U)@,
+ -- @S === P(S,S,...)@ and @S === CS(S)@, and so on.
--
-- We only really use 'Poly' with 'C_10' (bottom), 'C_00' (absent),
-- 'C_0N' (top) and sometimes 'C_1N', but it's simpler to treat it uniformly
@@ -278,7 +280,8 @@ data SubDemand
-- ^ @Call n sd@ describes the evaluation context of @n@ function
-- applications, where every individual result is evaluated according to @sd at .
-- @sd@ is /relative/ to a single call, cf. Note [Call demands are relative].
- -- Used only for values of function type.
+ -- Used only for values of function type. Use the smart constructor 'mkCall'
+ -- whenever possible!
| Prod ![Demand]
-- ^ @Prod ds@ describes the evaluation context of a case scrutinisation
-- on an expression of product type, where the product components are
@@ -306,7 +309,7 @@ polyDmd C_1N = C_1N :* poly1N
polyDmd C_10 = C_10 :* poly10
-- | A smart constructor for 'Prod', applying rewrite rules along the semantic
--- equalities @Prod [polyDmd n, ...] === polyDmd n@, simplifying to 'Poly'
+-- equality @Prod [polyDmd n, ...] === polyDmd n@, simplifying to 'Poly'
-- 'SubDemand's when possible. Note that this degrades boxity information! E.g. a
-- polymorphic demand will never unbox.
mkProd :: [Demand] -> SubDemand
@@ -335,6 +338,13 @@ viewProd _ _ = Nothing
{-# INLINE viewProd #-} -- we want to fuse away the replicate and the allocation
-- for Arity. Otherwise, #18304 bites us.
+-- | A smart constructor for 'Call', applying rewrite rules along the semantic
+-- equality @Call n (Poly n) === Poly n@, simplifying to 'Poly' 'SubDemand's
+-- when possible.
+mkCall :: Card -> SubDemand -> SubDemand
+mkCall n cd@(Poly m) | n == m = cd
+mkCall n cd = Call n cd
+
-- | @viewCall sd@ interprets @sd@ as a 'Call', expanding 'Poly' demands as
-- necessary.
viewCall :: SubDemand -> Maybe (Card, SubDemand)
@@ -356,8 +366,9 @@ lubSubDmd (Prod ds1) (viewProd (length ds1) -> Just ds2) =
-- Handle Call
lubSubDmd (Call n1 d1) (viewCall -> Just (n2, d2))
-- See Note [Call demands are relative]
- | isAbs n2 = Call (lubCard n1 n2) (lubSubDmd d1 botSubDmd)
- | otherwise = Call (lubCard n1 n2) (lubSubDmd d1 d2)
+ | isAbs n1 = mkCall (lubCard n1 n2) (lubSubDmd botSubDmd d2)
+ | isAbs n2 = mkCall (lubCard n1 n2) (lubSubDmd d1 botSubDmd)
+ | otherwise = mkCall (lubCard n1 n2) (lubSubDmd d1 d2)
-- Handle Poly
lubSubDmd (Poly n1) (Poly n2) = Poly (lubCard n1 n2)
-- Make use of reflexivity (so we'll match the Prod or Call cases again).
@@ -367,7 +378,7 @@ lubSubDmd _ _ = topSubDmd
-- | Denotes '∪' on 'Demand'.
lubDmd :: Demand -> Demand -> Demand
-lubDmd (n1 :* sd1) (n2 :* sd2) = lubCard n1 n2 :* lubSubDmd sd1 sd2
+lubDmd (n1 :* sd1) (n2 :* sd2) = lubCard n1 n2 :* lubSubDmd sd1 sd2
-- | Denotes '+' on 'SubDemand'.
plusSubDmd :: SubDemand -> SubDemand -> SubDemand
@@ -377,8 +388,9 @@ plusSubDmd (Prod ds1) (viewProd (length ds1) -> Just ds2) =
-- Handle Call
plusSubDmd (Call n1 d1) (viewCall -> Just (n2, d2))
-- See Note [Call demands are relative]
- | isAbs n2 = Call (plusCard n1 n2) (lubSubDmd d1 botSubDmd)
- | otherwise = Call (plusCard n1 n2) (lubSubDmd d1 d2)
+ | isAbs n1 = mkCall (plusCard n1 n2) (lubSubDmd botSubDmd d2)
+ | isAbs n2 = mkCall (plusCard n1 n2) (lubSubDmd d1 botSubDmd)
+ | otherwise = mkCall (plusCard n1 n2) (lubSubDmd d1 d2)
-- Handle Poly
plusSubDmd (Poly n1) (Poly n2) = Poly (plusCard n1 n2)
-- Make use of reflexivity (so we'll match the Prod or Call cases again).
@@ -407,7 +419,7 @@ multSubDmd :: Card -> SubDemand -> SubDemand
multSubDmd n sd
| Just sd' <- multTrivial n seqSubDmd sd = sd'
multSubDmd n (Poly n') = Poly (multCard n n')
-multSubDmd n (Call n' sd) = Call (multCard n n') sd -- See Note [Call demands are relative]
+multSubDmd n (Call n' sd) = mkCall (multCard n n') sd -- See Note [Call demands are relative]
multSubDmd n (Prod ds) = Prod (map (multDmd n) ds)
multDmd :: Card -> Demand -> Demand
@@ -457,22 +469,22 @@ evalDmd = C_1N :* topSubDmd
-- | First argument of 'GHC.Exts.maskAsyncExceptions#': @SCS(U)@.
-- Called exactly once.
strictOnceApply1Dmd :: Demand
-strictOnceApply1Dmd = C_11 :* Call C_11 topSubDmd
+strictOnceApply1Dmd = C_11 :* mkCall C_11 topSubDmd
-- | First argument of 'GHC.Exts.atomically#': @MCM(U)@.
-- Called at least once, possibly many times.
strictManyApply1Dmd :: Demand
-strictManyApply1Dmd = C_1N :* Call C_1N topSubDmd
+strictManyApply1Dmd = C_1N :* mkCall C_1N topSubDmd
-- | First argument of catch#: @1C1(U)@.
-- Evaluates its arg lazily, but then applies it exactly once to one argument.
lazyApply1Dmd :: Demand
-lazyApply1Dmd = C_01 :* Call C_01 topSubDmd
+lazyApply1Dmd = C_01 :* mkCall C_01 topSubDmd
-- | Second argument of catch#: @1C1(CS(U))@.
-- Calls its arg lazily, but then applies it exactly once to an additional argument.
lazyApply2Dmd :: Demand
-lazyApply2Dmd = C_01 :* Call C_01 (Call C_11 topSubDmd)
+lazyApply2Dmd = C_01 :* mkCall C_01 (mkCall C_11 topSubDmd)
-- | Make a 'Demand' evaluated at-most-once.
oneifyDmd :: Demand -> Demand
@@ -512,12 +524,12 @@ strictifyDictDmd ty (n :* Prod ds)
strictifyDictDmd _ dmd = dmd
-- | Wraps the 'SubDemand' with a one-shot call demand: @d@ -> @CS(d)@.
-mkCallDmd :: SubDemand -> SubDemand
-mkCallDmd sd = Call C_11 sd
+mkCalledOnceDmd :: SubDemand -> SubDemand
+mkCalledOnceDmd sd = mkCall C_11 sd
--- | @mkCallDmds n d@ returns @CS(CS...(CS d))@ where there are @n@ @CS@'s.
-mkCallDmds :: Arity -> SubDemand -> SubDemand
-mkCallDmds arity sd = iterate mkCallDmd sd !! arity
+-- | @mkCalledOnceDmds n d@ returns @CS(CS...(CS d))@ where there are @n@ @CS@'s.
+mkCalledOnceDmds :: Arity -> SubDemand -> SubDemand
+mkCalledOnceDmds arity sd = iterate mkCalledOnceDmd sd !! arity
-- | Peels one call level from the sub-demand, and also returns how many
-- times we entered the lambda body.
@@ -669,7 +681,7 @@ This is needed even for non-product types, in case the case-binder
is used but the components of the case alternative are not.
Note [Don't optimise UP(U,U,...) to U]
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These two SubDemands:
UP(U,U) (@Prod [topDmd, topDmd]@) and U (@topSubDmd@)
are semantically equivalent, but we do not turn the former into
@@ -1571,9 +1583,9 @@ This is weird, so I'm not worried about whether this optimises brilliantly; but
it should not fall over.
-}
-zapUsageEnvSig :: StrictSig -> StrictSig
--- Remove the usage environment from the demand
-zapUsageEnvSig (StrictSig (DmdType _ ds r)) = mkClosedStrictSig ds r
+-- | Remove the demand environment from the signature.
+zapDmdEnvSig :: StrictSig -> StrictSig
+zapDmdEnvSig (StrictSig (DmdType _ ds r)) = mkClosedStrictSig ds r
zapUsageDemand :: Demand -> Demand
-- Remove the usage info, but not the strictness info, from the demand
@@ -1615,8 +1627,8 @@ kill_usage kfs (n :* sd) = kill_usage_card kfs n :* kill_usage_sd kfs sd
kill_usage_sd :: KillFlags -> SubDemand -> SubDemand
kill_usage_sd kfs (Call n sd)
- | kf_called_once kfs = Call (lubCard C_1N n) (kill_usage_sd kfs sd)
- | otherwise = Call n (kill_usage_sd kfs sd)
+ | kf_called_once kfs = mkCall (lubCard C_1N n) (kill_usage_sd kfs sd)
+ | otherwise = mkCall n (kill_usage_sd kfs sd)
kill_usage_sd kfs (Prod ds) = Prod (map (kill_usage kfs) ds)
kill_usage_sd _ sd = sd
@@ -1640,7 +1652,7 @@ trimToType (n :* sd) ts
where
go (Prod ds) (TsProd tss)
| equalLength ds tss = Prod (zipWith trimToType ds tss)
- go (Call n sd) (TsFun ts) = Call n (go sd ts)
+ go (Call n sd) (TsFun ts) = mkCall n (go sd ts)
go sd@Poly{} _ = sd
go _ _ = topSubDmd
@@ -1804,7 +1816,7 @@ instance Binary SubDemand where
h <- getByte bh
case h of
0 -> Poly <$> get bh
- 1 -> Call <$> get bh <*> get bh
+ 1 -> mkCall <$> get bh <*> get bh
2 -> Prod <$> get bh
_ -> pprPanic "Binary:SubDemand" (ppr (fromIntegral h :: Int))
=====================================
compiler/GHC/Types/Id/Info.hs
=====================================
@@ -650,7 +650,7 @@ zapUsageInfo info = Just (info {demandInfo = zapUsageDemand (demandInfo info)})
zapUsageEnvInfo :: IdInfo -> Maybe IdInfo
zapUsageEnvInfo info
| hasDemandEnvSig (strictnessInfo info)
- = Just (info {strictnessInfo = zapUsageEnvSig (strictnessInfo info)})
+ = Just (info {strictnessInfo = zapDmdEnvSig (strictnessInfo info)})
| otherwise
= Nothing
=====================================
distrib/mkDocs/mkDocs
=====================================
@@ -31,7 +31,9 @@ cd ..
tar -Jxf "$WINDOWS_BINDIST"
mv ghc* windows
cd inst/share/doc/ghc*/html/libraries
-mv ../../../../../../windows/doc/html/libraries/Win32-* .
+mv ../../../../../../windows/doc/html/libraries/Win32-* . || \ # make binary distribution
+ mv ../../../../../../windows/docs/html/libraries/Win32 . || \ # hadrian binary distribution
+ die "failed to find the Win32 package documentation"
sh gen_contents_index
cd ..
for i in haddock libraries users_guide
=====================================
hadrian/src/Rules/BinaryDist.hs
=====================================
@@ -12,6 +12,7 @@ import Settings
import Settings.Program (programContext)
import Target
import Utilities
+import qualified System.Directory.Extra as IO
{-
Note [Binary distributions]
@@ -136,13 +137,20 @@ bindistRules = do
copyDirectory (ghcBuildDir -/- "bin") bindistFilesDir
copyDirectory (ghcBuildDir -/- "lib") bindistFilesDir
copyDirectory (rtsIncludeDir) bindistFilesDir
+
unless cross $ need ["docs"]
+
-- TODO: we should only embed the docs that have been generated
-- depending on the current settings (flavours' "ghcDocs" field and
-- "--docs=.." command-line flag)
-- Currently we embed the "docs" directory if it exists but it may
-- contain outdated or even invalid data.
- whenM (doesDirectoryExist (root -/- "docs")) $ do
+
+ -- Use the IO version of doesDirectoryExist because the Shake Action
+ -- version should not be used for directories the build system can
+ -- create. Using the Action version caused documentation to not be
+ -- included in the bindist in the past (part of the problem in #18669).
+ whenM (liftIO (IO.doesDirectoryExist (root -/- "docs"))) $ do
copyDirectory (root -/- "docs") bindistFilesDir
when windowsHost $ do
copyDirectory (root -/- "mingw") bindistFilesDir
=====================================
testsuite/tests/arityanal/should_compile/Arity11.stderr
=====================================
@@ -35,7 +35,7 @@ end Rec }
-- RHS size: {terms: 52, types: 28, coercions: 0, joins: 0/5}
F11.$wfib [InlPrag=[2]] :: forall {a} {p}. (a -> a -> Bool) -> (Num a, Num p) => a -> p
-[GblId, Arity=4, Str=<MCM(CS(U))><UP(A,UCU(CS(U)),A,A,A,A,UCU(U))><UP(UCU(CS(U)),A,A,A,A,A,1C1(U))><U>, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 150 60 0] 460 0}]
+[GblId, Arity=4, Str=<MCM(CS(U))><UP(A,UCU(CS(U)),A,A,A,A,U)><UP(UCU(CS(U)),A,A,A,A,A,1C1(U))><U>, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 150 60 0] 460 0}]
F11.$wfib
= \ (@a) (@p) (ww :: a -> a -> Bool) (w :: Num a) (w1 :: Num p) (w2 :: a) ->
let {
@@ -73,7 +73,7 @@ F11.$wfib
fib [InlPrag=[2]] :: forall {a} {p}. (Eq a, Num a, Num p) => a -> p
[GblId,
Arity=4,
- Str=<SP(MCM(CS(U)),A)><UP(A,UCU(CS(U)),A,A,A,A,UCU(U))><UP(UCU(CS(U)),A,A,A,A,A,UCU(U))><U>,
+ Str=<SP(MCM(CS(U)),A)><UP(A,UCU(CS(U)),A,A,A,A,U)><UP(UCU(CS(U)),A,A,A,A,A,U)><U>,
Unf=Unf{Src=InlineStable, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(arity=4,unsat_ok=True,boring_ok=False)
Tmpl= \ (@a) (@p) (w [Occ=Once1!] :: Eq a) (w1 [Occ=Once1] :: Num a) (w2 [Occ=Once1] :: Num p) (w3 [Occ=Once1] :: a) -> case w of { GHC.Classes.C:Eq ww1 [Occ=Once1] _ [Occ=Dead] -> F11.$wfib @a @p ww1 w1 w2 w3 }}]
fib = \ (@a) (@p) (w :: Eq a) (w1 :: Num a) (w2 :: Num p) (w3 :: a) -> case w of { GHC.Classes.C:Eq ww1 ww2 -> F11.$wfib @a @p ww1 w1 w2 w3 }
=====================================
testsuite/tests/arityanal/should_compile/Arity16.stderr
=====================================
@@ -5,7 +5,7 @@ Result size of Tidy Core = {terms: 52, types: 87, coercions: 0, joins: 0/0}
Rec {
-- RHS size: {terms: 15, types: 17, coercions: 0, joins: 0/0}
map2 [Occ=LoopBreaker] :: forall {t} {a}. (t -> a) -> [t] -> [a]
-[GblId, Arity=2, Str=<UCU(U)><SU>, Unf=OtherCon []]
+[GblId, Arity=2, Str=<U><SU>, Unf=OtherCon []]
map2
= \ (@t) (@a) (f :: t -> a) (ds :: [t]) ->
case ds of {
=====================================
testsuite/tests/stranal/should_compile/T18894.hs
=====================================
@@ -0,0 +1,28 @@
+{-# OPTIONS_GHC -O2 -fforce-recomp #-}
+
+-- | The point of this test is that @g*@ gets a demand that says
+-- "whenever @g*@ is called, the second component of the pair is evaluated strictly".
+module T18894 (h1, h2) where
+
+g1 :: Int -> (Int,Int)
+g1 1 = (15, 0)
+g1 n = (2 * n, 2 `div` n)
+{-# NOINLINE g1 #-}
+
+h1 :: Int -> Int
+h1 1 = 0
+-- Sadly, the @g1 2@ subexpression will be floated to top-level, where we
+-- don't see the specific demand placed on it by @snd@. Tracked in #19001.
+h1 2 = snd (g1 2)
+h1 m = uncurry (+) (g1 m)
+
+g2 :: Int -> Int -> (Int,Int)
+g2 m 1 = (m, 0)
+g2 m n = (2 * m, 2 `div` n)
+{-# NOINLINE g2 #-}
+
+h2 :: Int -> Int
+h2 1 = 0
+h2 m
+ | odd m = snd (g2 m 2)
+ | otherwise = uncurry (+) (g2 2 m)
=====================================
testsuite/tests/stranal/should_compile/T18894.stderr
=====================================
@@ -0,0 +1,404 @@
+
+==================== Demand analysis ====================
+Result size of Demand analysis
+ = {terms: 177, types: 97, coercions: 0, joins: 0/0}
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 20 0}]
+$trModule = "main"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 30 0}]
+$trModule = "T18894"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 3, types: 0, coercions: 0, joins: 0/0}
+T18894.$trModule :: GHC.Types.Module
+[LclIdX,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+T18894.$trModule = GHC.Types.Module $trModule $trModule
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 36, types: 16, coercions: 0, joins: 0/0}
+g2 [InlPrag=NOINLINE, Dmd=UCU(CS(P(1P(U),SP(U))))]
+ :: Int -> Int -> (Int, Int)
+[LclId,
+ Arity=2,
+ Str=<UP(U)><SP(SU)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [20 20] 161 20}]
+g2
+ = \ (m [Dmd=UP(U)] :: Int) (ds [Dmd=SP(SU)] :: Int) ->
+ case ds of { GHC.Types.I# ds [Dmd=SU] ->
+ case ds of ds [Dmd=1U] {
+ __DEFAULT ->
+ (case m of { GHC.Types.I# y -> GHC.Types.I# (GHC.Prim.*# 2# y) },
+ case ds of wild {
+ __DEFAULT ->
+ case GHC.Classes.divInt# 2# wild of ww4 { __DEFAULT ->
+ GHC.Types.I# ww4
+ };
+ -1# -> GHC.Types.I# -2#;
+ 0# -> case GHC.Real.divZeroError of wild [Dmd=B] { }
+ });
+ 1# -> (m, lvl)
+ }
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 2#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 2#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 36, types: 19, coercions: 0, joins: 0/0}
+h2 :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<SP(MU)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 162 10}]
+h2
+ = \ (ds [Dmd=SP(MU)] :: Int) ->
+ case ds of wild [Dmd=UP(U)] { GHC.Types.I# ds [Dmd=MU] ->
+ case ds of ds {
+ __DEFAULT ->
+ case GHC.Prim.remInt# ds 2# of {
+ __DEFAULT ->
+ case g2 wild lvl of { (ds1 [Dmd=A], y [Dmd=SU]) -> y };
+ 0# ->
+ case g2 lvl wild of { (x [Dmd=SP(U)], ds [Dmd=SP(U)]) ->
+ case x of { GHC.Types.I# x ->
+ case ds of { GHC.Types.I# y -> GHC.Types.I# (GHC.Prim.+# x y) }
+ }
+ }
+ };
+ 1# -> lvl
+ }
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 15#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 3, types: 2, coercions: 0, joins: 0/0}
+lvl :: (Int, Int)
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = (lvl, lvl)
+
+-- RHS size: {terms: 30, types: 11, coercions: 0, joins: 0/0}
+g1 [InlPrag=NOINLINE, Dmd=UCU(P(UP(U),UP(U)))] :: Int -> (Int, Int)
+[LclId,
+ Arity=1,
+ Str=<SP(SU)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 141 10}]
+g1
+ = \ (ds [Dmd=SP(SU)] :: Int) ->
+ case ds of { GHC.Types.I# ds [Dmd=SU] ->
+ case ds of ds {
+ __DEFAULT ->
+ (GHC.Types.I# (GHC.Prim.*# 2# ds),
+ case ds of wild {
+ __DEFAULT ->
+ case GHC.Classes.divInt# 2# wild of ww4 { __DEFAULT ->
+ GHC.Types.I# ww4
+ };
+ -1# -> GHC.Types.I# -2#;
+ 0# -> case GHC.Real.divZeroError of wild [Dmd=B] { }
+ });
+ 1# -> lvl
+ }
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 3, types: 0, coercions: 0, joins: 0/0}
+lvl :: (Int, Int)
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=False, ConLike=False,
+ WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
+lvl = g1 (GHC.Types.I# 2#)
+
+-- RHS size: {terms: 28, types: 18, coercions: 0, joins: 0/0}
+h1 :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<SP(MU)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 111 10}]
+h1
+ = \ (ds [Dmd=SP(MU)] :: Int) ->
+ case ds of wild [Dmd=1P(1U)] { GHC.Types.I# ds [Dmd=MU] ->
+ case ds of {
+ __DEFAULT ->
+ case g1 wild of { (x [Dmd=SP(U)], ds [Dmd=SP(U)]) ->
+ case x of { GHC.Types.I# x ->
+ case ds of { GHC.Types.I# y -> GHC.Types.I# (GHC.Prim.+# x y) }
+ }
+ };
+ 1# -> lvl;
+ 2# -> case lvl of { (ds1 [Dmd=A], y [Dmd=SU]) -> y }
+ }
+ }
+
+
+
+
+==================== Demand analysis ====================
+Result size of Demand analysis
+ = {terms: 171, types: 120, coercions: 0, joins: 0/0}
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 20 0}]
+$trModule = "main"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 30 0}]
+$trModule = "T18894"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 3, types: 0, coercions: 0, joins: 0/0}
+T18894.$trModule :: GHC.Types.Module
+[LclIdX,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+T18894.$trModule = GHC.Types.Module $trModule $trModule
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# -2#
+
+-- RHS size: {terms: 32, types: 18, coercions: 0, joins: 0/0}
+$wg2 [InlPrag=NOINLINE, Dmd=UCU(CS(P(1P(U),SP(U))))]
+ :: Int -> GHC.Prim.Int# -> (# Int, Int #)
+[LclId,
+ Arity=2,
+ Str=<UP(U)><SU>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [20 30] 121 20}]
+$wg2
+ = \ (w [Dmd=UP(U)] :: Int) (ww [Dmd=SU] :: GHC.Prim.Int#) ->
+ case ww of ds {
+ __DEFAULT ->
+ (# case w of { GHC.Types.I# y -> GHC.Types.I# (GHC.Prim.*# 2# y) },
+ case ds of {
+ __DEFAULT ->
+ case GHC.Classes.divInt# 2# ds of ww4 { __DEFAULT ->
+ GHC.Types.I# ww4
+ };
+ -1# -> lvl;
+ 0# -> case GHC.Real.divZeroError of wild [Dmd=B] { }
+ } #);
+ 1# -> (# w, lvl #)
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 2#
+
+-- RHS size: {terms: 34, types: 21, coercions: 0, joins: 0/0}
+$wh2 [InlPrag=[2]] :: GHC.Prim.Int# -> Int
+[LclId,
+ Arity=1,
+ Str=<SU>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 162 10}]
+$wh2
+ = \ (ww [Dmd=SU] :: GHC.Prim.Int#) ->
+ case ww of ds {
+ __DEFAULT ->
+ case GHC.Prim.remInt# ds 2# of {
+ __DEFAULT ->
+ case $wg2 (GHC.Types.I# ds) 2# of
+ { (# ww [Dmd=A], ww [Dmd=SU] #) ->
+ ww
+ };
+ 0# ->
+ case $wg2 lvl ds of { (# ww [Dmd=SP(U)], ww [Dmd=SP(U)] #) ->
+ case ww of { GHC.Types.I# x ->
+ case ww of { GHC.Types.I# y -> GHC.Types.I# (GHC.Prim.+# x y) }
+ }
+ }
+ };
+ 1# -> lvl
+ }
+
+-- RHS size: {terms: 6, types: 3, coercions: 0, joins: 0/0}
+h2 [InlPrag=[2]] :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<SP(SU)>,
+ Unf=Unf{Src=InlineStable, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (w [Occ=Once1!] :: Int) ->
+ case w of { GHC.Types.I# ww [Occ=Once1, Dmd=MU] -> $wh2 ww }}]
+h2
+ = \ (w [Dmd=SP(SU)] :: Int) ->
+ case w of { GHC.Types.I# ww [Dmd=SU] -> $wh2 ww }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 15#
+
+-- RHS size: {terms: 28, types: 15, coercions: 0, joins: 0/0}
+$wg1 [InlPrag=NOINLINE, Dmd=UCU(P(UP(U),UP(U)))]
+ :: GHC.Prim.Int# -> (# Int, Int #)
+[LclId,
+ Arity=1,
+ Str=<SU>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 111 20}]
+$wg1
+ = \ (ww [Dmd=SU] :: GHC.Prim.Int#) ->
+ case ww of ds {
+ __DEFAULT ->
+ (# GHC.Types.I# (GHC.Prim.*# 2# ds),
+ case ds of {
+ __DEFAULT ->
+ case GHC.Classes.divInt# 2# ds of ww4 { __DEFAULT ->
+ GHC.Types.I# ww4
+ };
+ -1# -> lvl;
+ 0# -> case GHC.Real.divZeroError of wild [Dmd=B] { }
+ } #);
+ 1# -> (# lvl, lvl #)
+ }
+
+-- RHS size: {terms: 7, types: 9, coercions: 0, joins: 0/0}
+lvl :: (Int, Int)
+[LclId,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=False, ConLike=False,
+ WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 40 10}]
+lvl = case $wg1 2# of { (# ww, ww #) -> (ww, ww) }
+
+-- RHS size: {terms: 25, types: 18, coercions: 0, joins: 0/0}
+$wh1 [InlPrag=[2]] :: GHC.Prim.Int# -> Int
+[LclId,
+ Arity=1,
+ Str=<SU>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True, Guidance=IF_ARGS [50] 101 10}]
+$wh1
+ = \ (ww [Dmd=SU] :: GHC.Prim.Int#) ->
+ case ww of ds [Dmd=1U] {
+ __DEFAULT ->
+ case $wg1 ds of { (# ww [Dmd=SP(U)], ww [Dmd=SP(U)] #) ->
+ case ww of { GHC.Types.I# x ->
+ case ww of { GHC.Types.I# y -> GHC.Types.I# (GHC.Prim.+# x y) }
+ }
+ };
+ 1# -> lvl;
+ 2# -> case lvl of { (ds1 [Dmd=A], y [Dmd=SU]) -> y }
+ }
+
+-- RHS size: {terms: 6, types: 3, coercions: 0, joins: 0/0}
+h1 [InlPrag=[2]] :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<SP(SU)>,
+ Unf=Unf{Src=InlineStable, TopLvl=True, Value=True, ConLike=True,
+ WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (w [Occ=Once1!] :: Int) ->
+ case w of { GHC.Types.I# ww [Occ=Once1, Dmd=MU] -> $wh1 ww }}]
+h1
+ = \ (w [Dmd=SP(SU)] :: Int) ->
+ case w of { GHC.Types.I# ww [Dmd=SU] -> $wh1 ww }
+
+
+
=====================================
testsuite/tests/stranal/should_compile/T18894b.hs
=====================================
@@ -0,0 +1,20 @@
+{-# OPTIONS_GHC -O -fno-call-arity -fforce-recomp #-}
+
+module T18894 (f) where
+
+expensive :: Int -> (Int, Int)
+expensive n = (n+1, n+2)
+{-# NOINLINE expensive #-}
+
+-- arity 1 by itself, but not exported, thus can be eta-expanded based on usage
+eta :: Int -> Int -> Int
+eta x = if fst (expensive x) == 13
+ then \y -> x + y
+ else \y -> x * y
+{-# NOINLINE eta #-}
+
+f :: Int -> Int
+f 1 = 0
+f m
+ | odd m = eta m 2
+ | otherwise = eta 2 m
=====================================
testsuite/tests/stranal/should_compile/T18894b.stderr
=====================================
@@ -0,0 +1,187 @@
+
+==================== Demand analysis ====================
+Result size of Demand analysis = {terms: 83, types: 40, coercions: 0, joins: 0/0}
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 20 0}]
+$trModule = "main"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 30 0}]
+$trModule = "T18894"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 3, types: 0, coercions: 0, joins: 0/0}
+T18894.$trModule :: GHC.Types.Module
+[LclIdX, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+T18894.$trModule = GHC.Types.Module $trModule $trModule
+
+-- RHS size: {terms: 16, types: 7, coercions: 0, joins: 0/0}
+expensive [InlPrag=NOINLINE, Dmd=UCU(P(SP(SU),A))] :: Int -> (Int, Int)
+[LclId,
+ Arity=1,
+ Str=<UP(U)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [40] 52 10}]
+expensive
+ = \ (n [Dmd=UP(U)] :: Int) ->
+ (case n of { GHC.Types.I# x -> GHC.Types.I# (GHC.Prim.+# x 1#) }, case n of { GHC.Types.I# x -> GHC.Types.I# (GHC.Prim.+# x 2#) })
+
+-- RHS size: {terms: 20, types: 11, coercions: 0, joins: 0/0}
+eta [InlPrag=NOINLINE, Dmd=UCU(CS(U))] :: Int -> Int -> Int
+[LclId,
+ Arity=1,
+ Str=<UP(U)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 140 120}]
+eta
+ = \ (x [Dmd=UP(U)] :: Int) ->
+ case expensive x of { (x [Dmd=SP(SU)], ds1 [Dmd=A]) ->
+ case x of { GHC.Types.I# x [Dmd=SU] ->
+ case x of {
+ __DEFAULT -> \ (y [Dmd=SP(U)] :: Int) -> GHC.Num.$fNumInt_$c* x y;
+ 13# -> \ (y [Dmd=SP(U)] :: Int) -> GHC.Num.$fNumInt_$c+ x y
+ }
+ }
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 2#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 2#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 21, types: 5, coercions: 0, joins: 0/0}
+f :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<SP(MU)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 111 0}]
+f = \ (ds [Dmd=SP(MU)] :: Int) ->
+ case ds of wild [Dmd=UP(U)] { GHC.Types.I# ds [Dmd=MU] ->
+ case ds of ds {
+ __DEFAULT ->
+ case GHC.Prim.remInt# ds 2# of {
+ __DEFAULT -> eta wild lvl;
+ 0# -> eta lvl wild
+ };
+ 1# -> lvl
+ }
+ }
+
+
+
+
+==================== Demand analysis ====================
+Result size of Demand analysis = {terms: 85, types: 47, coercions: 0, joins: 0/0}
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 20 0}]
+$trModule = "main"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Prim.Addr#
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 30 0}]
+$trModule = "T18894"#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+$trModule :: GHC.Types.TrName
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+$trModule = GHC.Types.TrNameS $trModule
+
+-- RHS size: {terms: 3, types: 0, coercions: 0, joins: 0/0}
+T18894.$trModule :: GHC.Types.Module
+[LclIdX, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+T18894.$trModule = GHC.Types.Module $trModule $trModule
+
+-- RHS size: {terms: 16, types: 9, coercions: 0, joins: 0/0}
+$wexpensive [InlPrag=NOINLINE, Dmd=UCU(P(SP(SU),A))] :: Int -> (# Int, Int #)
+[LclId,
+ Arity=1,
+ Str=<UP(U)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [40] 42 10}]
+$wexpensive
+ = \ (w [Dmd=UP(U)] :: Int) ->
+ (# case w of { GHC.Types.I# x -> GHC.Types.I# (GHC.Prim.+# x 1#) },
+ case w of { GHC.Types.I# x -> GHC.Types.I# (GHC.Prim.+# x 2#) } #)
+
+-- RHS size: {terms: 19, types: 12, coercions: 0, joins: 0/0}
+eta [InlPrag=NOINLINE, Dmd=UCU(CS(U))] :: Int -> Int -> Int
+[LclId,
+ Arity=2,
+ Str=<MP(U)><SP(U)>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0 0] 120 0}]
+eta
+ = \ (x [Dmd=MP(U)] :: Int) (eta [Dmd=SP(U), OS=OneShot] :: Int) ->
+ case $wexpensive x of { (# ww [Dmd=SP(SU)], ww [Dmd=A] #) ->
+ case ww of { GHC.Types.I# x [Dmd=SU] ->
+ case x of {
+ __DEFAULT -> GHC.Num.$fNumInt_$c* x eta;
+ 13# -> GHC.Num.$fNumInt_$c+ x eta
+ }
+ }
+ }
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 2#
+
+-- RHS size: {terms: 2, types: 0, coercions: 0, joins: 0/0}
+lvl :: Int
+[LclId, Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 10}]
+lvl = GHC.Types.I# 0#
+
+-- RHS size: {terms: 20, types: 3, coercions: 0, joins: 0/0}
+$wf [InlPrag=[2]] :: GHC.Prim.Int# -> Int
+[LclId,
+ Arity=1,
+ Str=<SU>,
+ Unf=Unf{Src=<vanilla>, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 121 0}]
+$wf
+ = \ (ww [Dmd=SU] :: GHC.Prim.Int#) ->
+ case ww of ds {
+ __DEFAULT ->
+ case GHC.Prim.remInt# ds 2# of {
+ __DEFAULT -> eta (GHC.Types.I# ds) lvl;
+ 0# -> eta lvl (GHC.Types.I# ds)
+ };
+ 1# -> lvl
+ }
+
+-- RHS size: {terms: 6, types: 3, coercions: 0, joins: 0/0}
+f [InlPrag=[2]] :: Int -> Int
+[LclIdX,
+ Arity=1,
+ Str=<SP(SU)>,
+ Unf=Unf{Src=InlineStable, TopLvl=True, Value=True, ConLike=True, WorkFree=True, Expandable=True,
+ Guidance=ALWAYS_IF(arity=1,unsat_ok=True,boring_ok=False)
+ Tmpl= \ (w [Occ=Once1!] :: Int) -> case w of { GHC.Types.I# ww [Occ=Once1, Dmd=MU] -> $wf ww }}]
+f = \ (w [Dmd=SP(SU)] :: Int) -> case w of { GHC.Types.I# ww [Dmd=SU] -> $wf ww }
+
+
+
=====================================
testsuite/tests/stranal/should_compile/all.T
=====================================
@@ -58,3 +58,7 @@ test('T18122', [ grep_errmsg(r'wfoo =') ], compile, ['-ddump-simpl'])
# We care about the call demand on $wg
test('T18903', [ grep_errmsg(r'Dmd=\S+C\S+') ], compile, ['-ddump-simpl -dsuppress-uniques'])
+# We care about the call demand on $wg1 and $wg2
+test('T18894', [ grep_errmsg(r'Dmd=\S+C\S+') ], compile, ['-ddump-stranal -dsuppress-uniques'])
+# We care about the Arity 2 on eta, as a result of the annotated Dmd
+test('T18894b', [ grep_errmsg(r'Arity=2') ], compile, ['-ddump-stranal -dsuppress-uniques -fno-call-arity -dppr-cols=200'])
=====================================
testsuite/tests/stranal/sigs/T5075.stderr
=====================================
@@ -1,7 +1,7 @@
==================== Strictness signatures ====================
T5075.$trModule:
-T5075.loop: <MP(A,A,MCM(CS(U)),A,A,A,A,A)><UP(A,A,UCU(CS(U)),A,A,A,UCU(U))><U>
+T5075.loop: <MP(A,A,MCM(CS(U)),A,A,A,A,A)><UP(A,A,UCU(CS(U)),A,A,A,U)><U>
@@ -13,6 +13,6 @@ T5075.loop:
==================== Strictness signatures ====================
T5075.$trModule:
-T5075.loop: <SP(A,A,MCM(CS(U)),A,A,A,A,A)><UP(A,A,UCU(CS(U)),A,A,A,UCU(U))><U>
+T5075.loop: <SP(A,A,MCM(CS(U)),A,A,A,A,A)><UP(A,A,UCU(CS(U)),A,A,A,U)><U>
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/05d4b26bcaaa34f5506440f00aee191292bc244a...9cc868bfd7ed77ffcc191028c5f57c97df634b79